A 1981 study of worker safety and health training in the industrial nations begins by quoting the French writer Victor Hugo: “No cause can succeed without first making education its ally” (Heath 1981). This observation surely still applies to occupational safety and health in the late twentieth century, and is relevant to organization personnel at all levels.
As the workplace becomes increasingly complex, new demands have arisen for greater understanding of the causes and means of prevention of accidents, injuries and illnesses. Government officials, academics, management and labour all have important roles to play in conducting the research that furthers this understanding. The critical next step is the effective transmission of this information to workers, supervisors, managers, government inspectors and safety and health professionals. Although education for occupational physicians and hygienists differs in many respects from the training of workers on the shop floor, there are also common principles that apply to all.
National education and training policies and practices will of course vary according to the economic, political, social, cultural and technological background of the country. In general, industrially advanced nations have proportionally more specialized occupational safety and health practitioners at their disposal than do the developing nations, and more sophisticated education and training programmes are available to these trained workers. More rural and less industrialized nations tend to rely more on “primary health care workers”, who may be worker representatives in factories or fields or health personnel in district health centres. Clearly, training needs and available resources will vary greatly in these situations. However, they all have in common the need for trained practitioners.
This article provides an overview of the most significant issues concerning education and training, including target audiences and their needs, the format and content of effective training and important current trends in the field.
In 1981, the Joint ILO/WHO Committee on Occupational Health identified the three levels of education required in occupational health, safety and ergonomics as (1) awareness, (2) training for specific needs and (3) specialization. These components are not separate, but rather are part of a continuum; any person may require information on all three levels. The main target groups for basic awareness are legislators, policy makers, managers and workers. Within these categories, many people require additional training in more specific tasks. For example, while all managers should have a basic understanding of the safety and health problems within their areas of responsibility and should know where to go for expert assistance, managers with specific responsibility for safety and health and compliance with regulations may need more intensive training. Similarly, workers who serve as safety delegates or members of safety and health committees need more than awareness training alone, as do government administrators involved in factory inspection and public health functions related to the workplace.
Those doctors, nurses and (especially in rural and developing areas) nonphysician primary health care workers whose primary training or practice does not include occupational medicine will need occupational health education in some depth in order to serve workers, for example by being able to recognize work-related illnesses. Finally, certain professions (for example, engineers, chemists, architects and designers) whose work has considerable impact on workers’ safety and health need much more specific education and training in these areas than they traditionally receive.
Specialists require the most intensive education and training, most often of the kind received in undergraduate and postgraduate programmes of study. Physicians, nurses, occupational hygienists, safety engineers and, more recently, ergonomists come under this category. With the rapid ongoing developments in all of these fields, continuing education and on-the-job experience are important components of the education of these professionals.
It is important to emphasize that increasing specialization in the fields of occupational hygiene and safety has taken place without a commensurate emphasis on the interdisciplinary aspects of these endeavours. A nurse or physician who suspects that a patient’s disease is work-related may well need the assistance of an occupational hygienist to identify the toxic exposure (for example) in the workplace that is causing the health problem. Given limited resources, many companies and governments often employ a safety specialist but not a hygienist, requiring that the safety specialist address health as well as safety concerns. The interdependence of safety and health issues should be addressed by offering interdisciplinary training and education to safety and health professionals.
Why Training and Education?
The primary tools needed to achieve the goals of reducing occupational injuries and illnesses and promoting occupational safety and health have been characterized as the “three E’s”—engineering, enforcement and education. The three are interdependent and receive varying levels of emphasis within different national systems. The overall rationale for training and education is to improve awareness of safety and health hazards, to expand knowledge of the causes of occupational illness and injury and to promote the implementation of effective preventive measures. The specific purpose and impetus for training will, however, vary for different target audiences.
Middle and upper level managers
The need for managers who are knowledgeable about the safety and health aspects of the operations for which they are responsible is more widely acknowledged today than heretofore. Employers increasingly recognize the considerable direct and indirect costs of serious accidents and the civil, and in some jurisdictions, criminal liability to which companies and individuals may be exposed. Although belief in the “careless worker” explanation for accidents and injuries remains prevalent, there is increasing recognition that “careless management” can be cited for conditions under its control that contribute to accidents and disease. Finally, firms also realize that poor safety performance is poor public relations; major disasters like the one in the Union Carbide plant in Bhopal (India) can offset years of effort to build a good name for a company.
Most managers are trained in economics, business or engineering and receive little or no instruction during their formal education in occupational health or safety matters. Yet daily management decisions have a critical impact on employee safety and health, both directly and indirectly. To remedy this state of affairs, safety and health concerns have begun to be introduced into management and engineering curricula and into continuing education programmes in many countries. Further efforts to make safety and health information more widespread are clearly necessary.
First-line supervisors
Research has demonstrated the central role played by first-line supervisors in the accident experience of construction employers (Samelson 1977). Supervisors who are knowledgeable about the safety and health hazards of their operations, who effectively train their crew members (especially new employees) and who are held accountable for their crew’s performance hold the key to improving conditions. They are the critical link between workers and the firm’s safety and health policies.
Workers
Law, custom and current workplace trends all contribute to the spread of employee education and training. Increasingly, employee safety and health training is required by government regulations. Some requirements apply to general practice, while others relate to specific industries, occupations or hazards. Although valid evaluation data on the effectiveness of such training as a countermeasure to work-related injuries and illnesses are surprisingly sparse (Vojtecky and Berkanovic 1984-85), training and education as means of improving safety and health performance are nonetheless becoming widely accepted in many countries and companies.
The growth of employee participation programmes, self-directed work teams and shop floor responsibility for decision-making has affected the way in which safety and health approaches are taken as well. Education and training are widely used to enhance knowledge and skills at the level of the line worker, who is now recognized as essential for the effectiveness of these new trends in work organization. A beneficial action employers can take is to involve employees early on (for example, in the planning and design stages when new technologies are introduced into a worksite) in order to anticipate and minimize adverse effects on the work environment.
Trade unions have been a moving force both in advocating more and better training for employees and in developing and delivering curricula and materials to their members. In many countries, safety committee members, safety delegates and works council representatives have assumed a growing role in the resolution of hazard problems at the worksite and in inspection and advocacy as well. Persons holding these positions all require training that is more complete and sophisticated than that given to an employee performing a particular job.
Safety and health professionals
The duties of safety and health personnel comprise a broad range of activities that differ widely from one country to another and even within a single profession. Included in this group are physicians, nurses, hygienists and safety engineers either engaged in independent practice or employed by individual worksites, large corporations, government health or labour inspectorates and academic institutions. The demand for trained professionals in the area of occupational safety and health has grown rapidly since the 1970s with the proliferation of government laws and regulations paralleling the growth of corporate safety and health departments and academic research in this field.
Scope and Objectives of Training and Education
This ILO Encyclopaedia itself presents the multitude of issues and hazards that must be addressed and the range of personnel required in a comprehensive safety and health programme. Taking the large view, we can consider the objectives of training and education for safety and health in a number of ways. In 1981, the Joint ILO/WHO Committee on Occupational Health offered the following categories of educational objectives which apply in some degree to all of the groups discussed thus far: (1) cognitive (knowledge), (2) psychomotor (professional skills) and (3) affective (attitude and values). Another framework describes the “information–education–training” continuum, roughly corresponding to the “what”, the “why” and the “how” of hazards and their control. And the “empowerment education” model, to be discussed below, puts great emphasis on the distinction between training—the teaching of competency-based skills with predictable behavioural outcomes—and education—the development of independent critical thinking and decision-making skills leading to effective group action (Wallerstein and Weinger 1992).
Workers need to understand and apply the safety procedures, proper tools and protective equipment for performing specific tasks as part of their job skills training. They also require training in how to rectify hazards that they observe and to be familiar with internal company procedures, in accordance with the safety and health laws and regulations which apply to their area of work. Similarly, supervisors and managers must be aware of the physical, chemical and psychosocial hazards present in their workplaces as well as the social, organizational and industrial relations factors that may be involved in the creation of these hazards and in their correction. Thus, gaining knowledge and skills of a technical nature as well as organizational, communication and problem-solving skills are all necessary objectives in education and training.
In recent years, safety and health education has been influenced by developments in education theory, particularly theories of adult learning. There are different aspects of these developments, such as empowerment education, cooperative learning and participative learning. All share the principle that adults learn best when they are actively involved in problem-solving exercises. Beyond the transmission of specific bits of knowledge or skills, effective education requires the development of critical thinking and an understanding of the context of behaviours and ways of linking what is learned in the classroom to action in the workplace. These principles seem especially appropriate to workplace safety and health, where the causes of hazardous conditions and illnesses and injuries are often a combination of environmental and physical factors, human behaviour and the social context.
In translating these principles into an education programme, four categories of objectives must be included:
Information objectives: the specific knowledge that trainees will acquire. For example, knowledge of the effects of organic solvents on the skin and on the central nervous system.
Behavioural objectives: the competencies and skills that workers will learn. For example, the ability to interpret chemical data sheets or to lift a heavy object safely.
Attitude objectives: the beliefs that interfere with safe performance or with response to training that must be addressed. The belief that accidents are not preventable or that “solvents can’t hurt me because I’ve worked with them for years and I’m fine” are examples.
Social action objectives: the ability to analyse a specific problem, identify its causes, propose solutions and plan and take action steps to resolve it. For example, the task of analysing a particular job where several people have sustained back injuries, and of proposing ergonomic modifications, requires the social action of changing the organization of work through labour-management cooperation.
Technological and Demographic Change
Training for awareness and management of specific safety and health hazards obviously depends on the nature of the workplace. While some hazards remain relatively constant, the changes that take place in the nature of jobs and technologies require continuous updating of training needs. Falls from heights, falling objects and noise, for example, have always been and will continue to be prominent hazards in the construction industry, but the introduction of many kinds of new synthetic building materials necessitates additional knowledge and awareness concerning their potential for adverse health effects. Similarly, unguarded belts, blades and other danger points on machinery remain common safety hazards but the introduction of industrial robots and other computer-controlled devices requires training in new types of machinery hazards.
With rapid global economic integration and the mobility of multinational corporations, old and new occupational hazards frequently exist side by side in both highly industrialized and developing countries. In an industrializing country, sophisticated electronics manufacturing operations may be located next door to a metal foundry that still relies on low technology and the heavy use of manual labour. Meanwhile, in industrialized countries, garment sweatshops with miserable safety and health conditions, or lead battery recycling operations (with their threat of lead toxicity), continue to exist alongside highly automated state-of-the-art industries.
The need for continual updating of information applies as much to workers and managers as it does to occupational health professionals. Inadequacies in the training even of the latter are evidenced by the fact that most occupational hygienists educated in the 1970s received scant training in ergonomics; and even though they received extensive training in air monitoring, it was applied almost exclusively to industrial worksites. But the single largest technological innovation affecting millions of workers since that time is the widespread introduction of computer terminals with visual display units (VDUs). Ergonomic evaluation and intervention to prevent musculoskeletal and vision problems among VDU users were unheard of in the 1970s; by the mid-nineties, VDU hazards had become a major concern of occupational hygiene. Similarly, the application of occupational hygiene principles to indoor air quality problems (to remedy “tight/sick building syndrome”, for example) has required a great deal of continuing education for hygienists accustomed only to evaluating factories. Psychosocial factors, also largely unrecognized as occupational health hazards before the 1980s, play an important role in the treatment of VDU and indoor air hazards, and of many others as well. All parties investigating such health problems need education and training in order to understand the complex interactions among environment, the individual and social organization in these settings.
The changing demographics of the workforce must also be considered in safety and health training. Women make up an increasing proportion of the workforce in both developed and developing nations; their health needs in and out of the workplace must be addressed. The concerns of immigrant workers raise numerous new training questions, including those to do with language, although language and literacy issues are certainly not limited to immigrant workers: varying literacy levels among native-born workers must also be considered in the design and delivery of training. Older workers are another group whose needs must be studied and incorporated into education programmes as their numbers increase in the working population of many nations.
Training Venues and Providers
The location of training and education programmes is determined by the audience, the purpose, the content, the duration of the programme and, to be realistic, the resources available in the country or region. The audience for safety and health education starts with schoolchildren, trainees and apprentices, and extends to workers, supervisors, managers and safety and health professionals.
Training in schools
Incorporation of safety and health education into elementary and secondary education, and especially in vocational and technical training schools, is a growing and very positive trend. The teaching of hazard recognition and control as a regular part of skills training for particular occupations or trades is far more effective than trying to impart such knowledge later, when the worker has been in the trade for a period of years, and has already developed set practices and behaviours. Such programmes, of course, necessitate that the teachers in these schools also be trained to recognize hazards and apply preventive measures.
Training of workers
On-the-job training at the worksite is appropriate for workers and supervisors facing specific hazards found onsite. If the training is of significant length, a comfortable classroom facility within the worksite is strongly recommended. In cases where locating the training at the workplace may intimidate workers or otherwise discourage their full participation in the class, an offsite venue is preferable. Workers may feel more comfortable in a union setting where the union plays a major role in designing and delivering the programme. However, field visits to actual work locations which illustrate the hazards in question are always a positive addition to the course.
Training of safety delegates and committee members
The longer and more sophisticated training recommended for safety delegates and committee representatives is often delivered at specialized training centres, universities or commercial facilities. More and more efforts are being made to implement regulatory requirements for training and certification of workers who are to perform in certain hazardous fields such as asbestos abatement and hazardous waste handling. These courses usually include both classroom and hands-on sessions, where actual performance is simulated and specialized equipment and facilities are required.
Providers of onsite and offsite programmes for workers and safety representatives include government agencies, tripartite organizations like the ILO or analogous national or sub-national bodies, business associations and labour unions, universities, professional associations and private training consultants. Many governments provide funds for the development of safety and health training and education programmes targeted at specific industries or hazards.
Academic and professional training
The training of safety and health professionals varies widely among countries, depending on the needs of the working population and the country’s resources and structures. Professional training is centred in undergraduate and postgraduate university programmes, but these vary in availability in different parts of the world. Degree programmes may be offered for specialists in occupational medicine and nursing and occupational health may be incorporated into the training of general practitioners and of primary care and public health nurses. The number of degree-granting programmes for occupational hygienists has increased dramatically. However, there remains a strong demand for short courses and less comprehensive training for hygiene technicians, many of whom have received their basic training on the job in particular industries.
There is an acute need for more trained safety and health personnel in the developing world. While more university-trained and credentialed physicians, nurses and hygienists will undoubtedly be welcomed in these countries, it is nonetheless realistic to expect that many health services will continue to be delivered by primary health care workers. These people need training in the relationship between work and health, in the recognition of the major safety and health risks associated with the type of work carried on in their region, in basic survey and sampling techniques, in the use of the referral network available in their region for suspected cases of occupational illness and in health education and risk communication techniques (WHO 1988).
Alternatives to university-based degree programmes are critically important to professional training in both developing and industrialized nations, and would include continuing education, distance education, on-the-job training and self-training, among others.
Education and training cannot solve all occupational safety and health problems, and care must be taken that the techniques learned in such programmes are in fact applied appropriately to the identified needs. They are, however, critical components of an effective safety and health programme when employed in conjunction with engineering and technical solutions. Cumulative, interactive and continuous learning is essential to prepare workers for rapidly changing work environments and to meet their needs, especially as regards the prevention of debilitating injuries and illnesses. Those who labour in the workplace as well as those who provide support from the outside need the most up-to-date information available and the skills to put this information to use in order to protect and promote worker health and safety.
Whereas the principles and methods of risk assessment for non-carcinogenic chemicals are similar in different parts of the world, it is striking that approaches for risk assessment of carcinogenic chemicals vary greatly. There are not only marked differences between countries, but even within a country different approaches are applied or advocated by various regulatory agencies, committees and scientists in the field of risk assessment. Risk assessment for non-carcinogens is comparatively consistent and well established, partly because of the longer history and better understanding of the nature of toxic effects as compared with carcinogens, and partly because of the high degree of consensus and confidence, among both scientists and the general public, in the methods used and their outcomes.
For non-carcinogenic chemicals, safety factors were introduced to compensate for uncertainties in the toxicology data (which are derived mostly from animal experiments) and in their applicability to large, heterogeneous human populations. In doing so, recommended or required limits on safe human exposures were usually set at a fraction (the safety or uncertainty factor approach) of the exposure levels in animals that could be clearly documented as the no observed adverse effects level (NOAEL) or the lowest observed adverse effects level (LOAEL). It was then assumed that as long as human exposure did not exceed the recommended limits, the hazardous properties of chemical substances would not be manifest. For many types of chemicals, this practice, in somewhat refined form, continues to this day in toxicological risk assessment.
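The arithmetic of the safety factor approach can be sketched as follows. The NOAEL value and the number and sizes of the uncertainty factors here are purely illustrative; actual factors are chosen case by case by the assessing body.

```python
# Illustrative sketch of the safety (uncertainty) factor approach for
# non-carcinogens. All numerical values are hypothetical examples, not
# regulatory limits.

def acceptable_intake(noael, interspecies=10.0, intraspecies=10.0, extra=1.0):
    """Divide the NOAEL (or LOAEL) from animal studies by uncertainty
    factors covering animal-to-human extrapolation, variability within
    the human population, and any additional concerns (for instance,
    when only a LOAEL is available)."""
    return noael / (interspecies * intraspecies * extra)

# A hypothetical NOAEL of 50 mg/kg body weight per day, with the common
# default 10 x 10 factors, gives a limit of 0.5 mg/kg per day:
limit = acceptable_intake(50.0)
print(limit)  # 0.5
```

The key assumption, as the text notes, is that exposures below this derived limit will not manifest the hazardous properties of the substance.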
During the late 1960s and early 1970s regulatory bodies, starting in the United States, were confronted with an increasingly important problem for which many scientists considered the safety factor approach to be inappropriate, and even dangerous. This was the problem with chemicals that under certain conditions had been shown to increase the risk of cancers in humans or experimental animals. These substances were operationally referred to as carcinogens. There is still debate and controversy on the definition of a carcinogen, and there is a wide range of opinion about techniques to identify and classify carcinogens and the process of cancer induction by chemicals as well.
The initial discussion started much earlier, when scientists in the 1940s discovered that chemical carcinogens caused damage by a biological mechanism that was of a totally different kind from those that produced other forms of toxicity. These scientists, using principles from the biology of radiation-induced cancers, put forth what is referred to as the “non-threshold” hypothesis, which was considered applicable to both radiation and carcinogenic chemicals. It was hypothesized that any exposure to a carcinogen that reaches its critical biological target, especially the genetic material, and interacts with it, can increase the probability (the risk) of cancer development.
Parallel to the ongoing scientific discussion on thresholds, there was a growing public concern on the adverse role of chemical carcinogens and the urgent need to protect the people from a set of diseases collectively called cancer. Cancer, with its insidious character and long latency period together with data showing that cancer incidences in the general population were increasing, was regarded by the general public and politicians as a matter of concern that warranted optimal protection. Regulators were faced with the problem of situations in which large numbers of people, sometimes nearly the entire population, were or could be exposed to relatively low levels of chemical substances (in consumer products and medicines, at the workplace as well as in air, water, food and soils) that had been identified as carcinogenic in humans or experimental animals under conditions of relatively intense exposures.
Those regulatory officials were confronted with two fundamental questions which, in most cases, could not be fully answered using available scientific methods: is there a level of exposure to a carcinogen below which no risk exists, and, if some risk remains at low exposures, how large is it?
Regulators recognized the need for assumptions, sometimes scientifically based but often also unsupported by experimental evidence. In order to achieve consistency, definitions and specific sets of assumptions were adapted that would be generically applied to all carcinogens.
Carcinogenesis Is a Multistage Process
Several lines of evidence support the conclusion that chemical carcinogenesis is a multistage process driven by genetic damage and epigenetic changes, and this theory is widely accepted in the scientific community all over the world (Barrett 1993). Although the process of chemical carcinogenesis is often separated into three stages—initiation, promotion and progression—the number of relevant genetic changes is not known.
Initiation involves the induction of an irreversibly altered cell and, for genotoxic carcinogens, is always equated with a mutational event. Mutagenesis as a mechanism of carcinogenesis was already hypothesized by Theodor Boveri in 1914, and many of his assumptions and predictions have subsequently been proven true. Because irreversible and self-replicating mutagenic effects can be caused by the smallest amount of a DNA-modifying carcinogen, no threshold is assumed. Promotion is the process by which the initiated cell expands (clonally) by a series of divisions and forms (pre)neoplastic lesions. There is considerable debate as to whether initiated cells undergo additional genetic changes during this promotion phase.
Finally, in the progression stage, “immortality” is attained and fully malignant tumours can develop, influencing angiogenesis and escaping the control systems of the host. Progression is characterized by invasive growth and frequently by metastatic spread of the tumour, and is accompanied by additional genetic changes owing to the instability of proliferating cells and to selection.
Therefore, there are three general mechanisms by which a substance can influence the multistep carcinogenic process. A chemical can induce a relevant genetic alteration, promote or facilitate clonal expansion of an initiated cell or stimulate progression to malignancy by somatic and/or genetic changes.
Risk Assessment Process
Risk can be defined as the predicted or actual frequency of occurrence of an adverse effect on humans or the environment from a given exposure to a hazard. Risk assessment is a method of systematically organizing the scientific information, and its attached uncertainties, for the description and quantification of the health risks associated with hazardous substances, processes, actions or events. It requires evaluation of relevant information and selection of the models to be used in drawing inferences from that information. Further, it requires explicit recognition of uncertainties and appropriate acknowledgement that alternative interpretations of the available data may be scientifically plausible. The current terminology used in risk assessment was proposed in 1984 by the US National Academy of Sciences. Qualitative risk assessment became hazard identification, and quantitative risk assessment was divided into the components of dose-response assessment, exposure assessment and risk characterization.
In the following section these components will be briefly discussed in view of our current knowledge of the process of (chemical) carcinogenesis. It will become clear that the dominant uncertainty in the risk assessment of carcinogens is the dose-response pattern at low dose levels characteristic for environmental exposure.
Hazard identification
This process identifies which compounds have the potential to cause cancer in humans—in other words, it identifies their intrinsic genotoxic properties. Combining information from various sources and on different properties serves as a basis for the classification of carcinogenic compounds. In general, the information used includes epidemiological studies, long-term animal carcinogenicity bioassays, short-term genotoxicity tests and considerations of structure-activity relationships.
Classification of chemicals into groups based on the assessment of the adequacy of the evidence of carcinogenicity in animals or in humans, where epidemiological data are available, is a key step in hazard identification. The best-known schemes for categorizing carcinogenic chemicals are those of the IARC (1987), the EU (1991) and the US EPA (1986). A comparison of the low-dose extrapolation procedures associated with several national approaches is given in table 1.
Table 1. Comparison of low-dose extrapolation procedures

| | Current US EPA | Denmark | EEC | UK | Netherlands | Norway |
|---|---|---|---|---|---|---|
| Genotoxic carcinogen | Linearized multistage procedure using most appropriate low-dose model | MLE from 1- and 2-hit models plus judgement of best outcome | No procedure specified | No model; scientific expertise and judgement from all available data | Linear model using TD50 (Peto method) or “Simple Dutch Method” if no TD50 | No procedure specified |
| Non-genotoxic carcinogen | Same as above | Biologically-based model of Thorslund, or multistage or Mantel-Bryan model, based on tumour origin and dose-response | Use NOAEL and safety factors | Use NOEL and safety factors to set ADI | Use NOEL and safety factors to set ADI | |
One important issue in classifying carcinogens, with sometimes far-reaching consequences for their regulation, is the distinction between genotoxic and non-genotoxic mechanisms of action. The US Environmental Protection Agency (EPA) default assumption for all substances showing carcinogenic activity in animal experiments is that no threshold exists (or at least none can be demonstrated), so there is some risk with any exposure. This is commonly referred to as the non-threshold assumption for genotoxic (DNA-damaging) compounds. The EU and many of its members, such as the United Kingdom, the Netherlands and Denmark, make a distinction between carcinogens that are genotoxic and those believed to produce tumours by non-genotoxic mechanisms. For genotoxic carcinogens quantitative dose-response estimation procedures are followed that assume no threshold, although the procedures might differ from those used by the EPA. For non-genotoxic substances it is assumed that a threshold exists, and dose-response procedures are used that assume a threshold. In the latter case, the risk assessment is generally based on a safety factor approach, similar to the approach for non-carcinogens.
It is important to keep in mind that these different schemes were developed to deal with risk assessments in different contexts and settings. The IARC scheme was not produced for regulatory purposes, although it has been used as a basis for developing regulatory guidelines. The EPA scheme was designed to serve as a decision point for entering quantitative risk assessment, whereas the EU scheme is currently used to assign a hazard (classification) symbol and risk phrases to the chemical's label. A more extended discussion on this subject is presented in a recent review (Moolenaar 1994) covering procedures used by eight governmental agencies and two often-cited independent organizations, the International Agency for Research on Cancer (IARC) and the American Conference of Governmental Industrial Hygienists (ACGIH).
The classification schemes generally do not take into account the extensive negative evidence that may be available. Also, in recent years a greater understanding of the mechanism of action of carcinogens has emerged. Evidence has accumulated that some mechanisms of carcinogenicity are species-specific and are not relevant for man. The following examples will illustrate this important phenomenon. First, it has been recently demonstrated in studies on the carcinogenicity of diesel particles, that rats respond with lung tumours to a heavy loading of the lung with particles. However, lung cancer is not seen in coal miners with very heavy lung burdens of particles. Secondly, there is the assertion of the nonrelevance of renal tumours in the male rat on the basis that the key element in the tumourgenic response is the accumulation in the kidney of α-2 microglobulin, a protein that does not exist in humans (Borghoff, Short and Swenberg 1990). Disturbances of rodent thyroid function and peroxisome proliferation or mitogenesis in the mouse liver have also to be mentioned in this respect.
This knowledge allows a more sophisticated interpretation of the results of a carcinogenicity bioassay. Research towards a better understanding of the mechanisms of action of carcinogenicity is encouraged because it may lead to an altered classification and to the addition of a category in which chemicals are classified as not carcinogenic to humans.
Exposure Assessment

Exposure assessment is often thought to be the component of risk assessment with the least inherent uncertainty, because exposures can in some cases be monitored directly and relatively well-validated exposure models are available. This is only partially true, however, because most exposure assessments are not conducted in ways that take full advantage of the range of available information, so there is considerable room for improving exposure distribution estimates. This holds for both external and internal exposure assessments. Especially for carcinogens, the use of target-tissue doses rather than external exposure levels in modelling dose-response relationships would lead to more relevant predictions of risk, although many assumptions on default values are involved. Physiologically based pharmacokinetic (PBPK) models, which estimate the amount of reactive metabolite reaching the target tissue, are potentially of great value for estimating these tissue doses.
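Full PBPK models are far richer than this, but the core point, namely that the target-tissue dose need not scale linearly with external exposure, can be sketched with a single well-mixed compartment and saturable (Michaelis-Menten) metabolism. All parameter values below are invented for illustration.

```python
# Single-compartment sketch with saturable (Michaelis-Menten) metabolism.
# VMAX and KM are invented values, chosen only to make the point visible.
VMAX = 10.0  # maximum rate of reactive-metabolite formation (hypothetical)
KM = 5.0     # concentration at half-maximal rate (hypothetical)

def metabolite_formation_rate(concentration):
    """Rate at which the reactive metabolite is formed in the target tissue."""
    return VMAX * concentration / (KM + concentration)

# A 100-fold increase in external exposure raises the internal dose rate by
# far less than 100-fold once metabolism saturates.
ratio = metabolite_formation_rate(100.0) / metabolite_formation_rate(1.0)
print(round(ratio, 2))
```

Under these assumptions, external exposure is a poor surrogate for internal dose precisely in the high-dose region where bioassays are run.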
Risk Characterization

A key consideration in risk characterization is the relation between the dose or exposure level that causes an effect in an animal study and the dose likely to cause a similar effect in humans. This includes both dose-response extrapolation from high to low dose and interspecies extrapolation. The extrapolation presents a logical problem, namely that data are extrapolated many orders of magnitude below the experimental exposure levels by empirical models that do not reflect the underlying mechanisms of carcinogenicity. This violates a basic principle in the fitting of empirical models, namely not to extrapolate outside the range of the observable data. This empirical extrapolation therefore results in large uncertainties, from both a statistical and a biological point of view. At present no single mathematical procedure is recognized as the most appropriate for low-dose extrapolation in carcinogenesis. The mathematical models that have been used to describe the relation between administered external dose, time and tumour incidence are based on tolerance-distribution or mechanistic assumptions, and sometimes on both. A summary of the most frequently cited models (Kramer et al. 1995) is given in table 2.
Table 2. Frequently cited models in carcinogen risk characterization
| Tolerance distribution models | Mechanistic models: hit models | Mechanistic models: biologically based |
|---|---|---|
| Probit | Multihit | Cohen and Ellwein |
| Gamma multihit | Linearized multistage1 | |

1 Time-to-tumour models.
These dose-response models are usually applied to tumour-incidence data corresponding to only a limited number of experimental doses. This is due to the standard design of the applied bioassay. Instead of determining the complete dose-response curve, a carcinogenicity study is in general limited to three (or two) relatively high doses, using the maximum tolerated dose (MTD) as highest dose. These high doses are used to overcome the inherent low statistical sensitivity (10 to 15% over background) of such bioassays, which is due to the fact that (for practical and other reasons) a relatively small number of animals is used. Because data for the low-dose region are not available (i.e., cannot be determined experimentally), extrapolation outside the range of observation is required. For almost all data sets, most of the above-listed models fit equally well in the observed dose range, due to the limited number of doses and animals. However, in the low-dose region these models diverge several orders of magnitude, thereby introducing large uncertainties to the risk estimated for these low exposure levels.
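This divergence can be demonstrated numerically. The sketch below fits two standard models of this literature, a one-hit model and a Weibull model, to the same invented three-dose bioassay data: both describe the observed range comparably well, yet their predictions at an environmentally relevant dose differ by orders of magnitude. All data values and fitting choices are hypothetical.

```python
import math

# Hypothetical three-dose bioassay: doses as fractions of the MTD and the
# excess (above-background) tumour incidences. All numbers are invented.
doses = [0.5, 1.0, 2.0]
incidence = [0.10, 0.25, 0.60]

def one_hit(d, q):
    """One-hit model, P(d) = 1 - exp(-q*d); essentially linear at low dose."""
    return 1.0 - math.exp(-q * d)

def weibull(d, b, k):
    """Weibull model, P(d) = 1 - exp(-b*d**k); sublinear at low dose if k > 1."""
    return 1.0 - math.exp(-b * d ** k)

def sse(model, params):
    """Sum of squared errors of a model against the observed incidences."""
    return sum((model(d, *params) - p) ** 2 for d, p in zip(doses, incidence))

# Crude grid-search fits; adequate for this illustration.
q_hat = min((i / 1000.0 for i in range(1, 2001)), key=lambda q: sse(one_hit, (q,)))
b_hat, k_hat = min(
    ((i / 100.0, j / 10.0) for i in range(1, 101) for j in range(10, 31)),
    key=lambda p: sse(weibull, p),
)

# Both fits describe the observed dose range comparably well, but at a dose
# four orders of magnitude lower their predictions diverge dramatically.
low = 1e-4
print(one_hit(low, q_hat), weibull(low, b_hat, k_hat))
```

With these invented data the two fitted curves are nearly indistinguishable over the tested doses, yet at the low dose the one-hit prediction exceeds the Weibull prediction by more than an order of magnitude, which is exactly the uncertainty described above.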
Because the actual form of the dose-response curve in the low-dose range cannot be generated experimentally, mechanistic insight into the process of carcinogenicity is crucial to be able to discriminate on this aspect between the various models. Comprehensive reviews discussing the various aspects of the different mathematical extrapolation models are presented in Kramer et al. (1995) and Park and Hawkins (1993).
Besides the current practice of mathematical modelling, several alternative approaches have recently been proposed.
Biologically motivated models
Currently, the biologically based models such as the Moolgavkar-Venzon-Knudson (MVK) models are very promising, but at present these are not sufficiently well advanced for routine use and require much more specific information than currently is obtained in bioassays. Large studies (4,000 rats) such as those carried out on N-nitrosoalkylamines indicate the size of the study which is required for the collection of such data, although it is still not possible to extrapolate to low doses. Until these models are further developed they can be used only on a case-by-case basis.
Assessment factor approach
The use of mathematical models for extrapolation below the experimental dose range is in effect equivalent to a safety factor approach with a large and ill-defined uncertainty factor. The simplest alternative would be to apply an assessment factor to the apparent “no effect level”, or the “lowest level tested”. The level used for this assessment factor should be determined on a case-by-case basis considering the nature of the chemical and the population being exposed.
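As a minimal sketch, assuming a hypothetical NOAEL and conventional tenfold factors, the assessment-factor approach reduces to a simple division:

```python
# Hypothetical assessment-factor calculation. The NOAEL and the individual
# factors are invented; in practice each factor is set case by case.
noael = 10.0          # mg/kg body weight per day, apparent no-effect level
interspecies = 10.0   # animal-to-human extrapolation
intraspecies = 10.0   # variability within the human population
additional = 10.0     # e.g. severity of effect or gaps in the database

acceptable_exposure = noael / (interspecies * intraspecies * additional)
print(acceptable_exposure)  # 0.01 mg/kg/day
```

The transparency of this arithmetic is its attraction; the difficulty lies entirely in justifying the size of each factor for the chemical and population at hand.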
Benchmark dose (BMD)
The basis of this approach is a mathematical model fitted to the experimental data within the observable range to estimate or interpolate a dose corresponding to a defined level of effect, such as one, five or ten per cent increase in tumour incidence (ED01, ED05, ED10). As a ten per cent increase is about the smallest change that statistically can be determined in a standard bioassay, the ED10 is appropriate for cancer data. Using a BMD that is within the observable range of the experiment avoids the problems associated with dose extrapolation. Estimates of the BMD or its lower confidence limit reflect the doses at which changes in tumour incidence occurred, but are quite insensitive to the mathematical model used. A benchmark dose can be used in risk assessment as a measure of tumour potency and combined with appropriate assessment factors to set acceptable levels for human exposure.
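Under a one-hit style model fitted within the observable range, the ED10 can be obtained in closed form by inverting P(d) = 1 - exp(-qd) at ten per cent extra risk. In the sketch below, the fitted slope q and the combined assessment factor of 1,000 are invented for illustration:

```python
import math

q = 0.38  # fitted slope of a one-hit model P(d) = 1 - exp(-q*d); hypothetical

def ed(extra_risk, q):
    """Dose giving a specified extra tumour risk under the one-hit model."""
    return -math.log(1.0 - extra_risk) / q

bmd10 = ed(0.10, q)           # benchmark dose at 10% extra risk (ED10)
acceptable = bmd10 / 1000.0   # illustrative combined assessment factor
print(bmd10, acceptable)
```

Because the ED10 sits inside the observed dose range, refitting with a different plausible model would move `bmd10` only slightly, in contrast to the low-dose extrapolations discussed earlier.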
Threshold of regulation
Krewski et al. (1990) have reviewed the concept of a "threshold of regulation" for chemical carcinogens. Based on data obtained from the carcinogen potency database (CPDB) for 585 experiments, the dose corresponding to a 10⁻⁶ risk was roughly log-normally distributed around a median of 70 to 90 ng/kg/day; exposure to dose levels above this range would be considered unacceptable. The dose was estimated by linear extrapolation from the TD50 (the dose inducing tumours in 50% of the animals tested) and was within a factor of five to ten of the figure obtained from the linearized multistage model. Unfortunately, TD50 values are related to the MTD, which again casts doubt on the validity of the measurement; however, the TD50 will often lie within or very close to the experimental data range.
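Linear extrapolation from the TD50 amounts to simple proportionality: if the TD50 corresponds to a 50% tumour risk, the dose at a risk of one in a million is the TD50 multiplied by (1e-6 / 0.5). With a hypothetical TD50 of 40 mg/kg/day this yields 80 ng/kg/day, within the median range quoted above:

```python
# Linear extrapolation from the TD50; the TD50 value here is invented.
td50_mg = 40.0        # mg/kg/day, dose giving tumours in 50% of animals
target_risk = 1e-6    # regulatory risk level of interest

dose_mg = td50_mg * target_risk / 0.5   # risk assumed proportional to dose
dose_ng = dose_mg * 1e6                 # convert mg to ng
print(dose_ng)  # 80.0 ng/kg/day
```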
Before such a threshold of regulation could be adopted, however, much more consideration of biological, analytical and mathematical issues, and a much wider database, would be required. Further investigation into the potencies of various carcinogens may throw further light on this area.
Objectives and Future of Carcinogen Risk Assessment
Looking back to the original expectations on the regulation of (environmental) carcinogens, namely to achieve a major reduction in cancer, it appears that the results at present are disappointing. Over the years it became apparent that the number of cancer cases estimated to be produced by regulatable carcinogens was disconcertingly small. Considering the high expectations that launched the regulatory efforts in the 1970s, a major anticipated reduction in the cancer death rate has not been achieved in terms of the estimated effects of environmental carcinogens, not even with ultraconservative quantitative assessment procedures. The main characteristic of the EPA procedures is that low-dose extrapolations are made in the same way for each chemical regardless of the mechanism of tumour formation in experimental studies. It should be noted, however, that this approach stands in sharp contrast to approaches taken by other governmental agencies. As indicated above, the EU and several European governments—Denmark, France, Germany, Italy, the Netherlands, Sweden, Switzerland, UK—distinguish between genotoxic and non-genotoxic carcinogens, and approach risk estimation differently for the two categories. In general, non-genotoxic carcinogens are treated as threshold toxicants. No effect levels are determined, and uncertainty factors are used to provide an ample margin of safety. To determine whether or not a chemical should be regarded as non-genotoxic is a matter of scientific debate and requires clear expert judgement.
The fundamental issue is: what is the cause of cancer in humans, and what is the role of environmental carcinogens in that causation? The hereditary aspects of cancer in humans are much more important than previously anticipated. The key to significant advancement in the risk assessment of carcinogens is a better understanding of the causes and mechanisms of cancer. The field of cancer research is entering a very exciting era. Molecular research may radically alter the way we view the impact of environmental carcinogens and the approaches to control and prevent cancer, both for the general public and in the workplace. Risk assessment of carcinogens needs to be based on concepts of mechanisms of action that are, in fact, only just emerging. One important aspect is the mechanism of heritable cancer and the interaction of carcinogens with this process. This knowledge will have to be incorporated into the systematic and consistent methodology that already exists for the risk assessment of carcinogens.
Group 1—Carcinogenic to Humans (74)
Agents and groups of agents
Aflatoxins [1402-68-2] (1993)
Arsenic [7440-38-2] and arsenic compounds2
Beryllium [7440-41-7] and beryllium compounds (1993)3
Bis(chloromethyl)ether [542-88-1] and chloromethyl methyl ether [107-30-2] (technical-grade)
1,4-Butanediol dimethanesulphonate (Myleran) [55-98-1]
Cadmium [7440-43-9] and cadmium compounds (1993)3
1-(2-Chloroethyl)-3-(4-methylcyclohexyl)-1-nitrosourea (Methyl-CCNU; Semustine) [13909-09-6]
Chromium[VI] compounds (1990)3
Ciclosporin [79217-60-0] (1990)
Cyclophosphamide [50-18-0] [6055-19-2]
Ethylene oxide4 [75-21-8] (1994)
Helicobacter pylori (infection with) (1994)
Hepatitis B virus (chronic infection with) (1993)
Hepatitis C virus (chronic infection with) (1993)
Human papillomavirus type 16 (1995)
Human papillomavirus type 18 (1995)
Human T-cell lymphotropic virus type I (1996)
8-Methoxypsoralen (Methoxsalen) [298-81-7] plus ultraviolet A radiation
MOPP and other combined chemotherapy including alkylating agents
Mustard gas (Sulphur mustard) [505-60-2]
Nickel compounds (1990)3
Oestrogen replacement therapy
Opisthorchis viverrini (infection with) (1994)
Oral contraceptives, combined5
Oral contraceptives, sequential
Radon [10043-92-2] and its decay products (1988)
Schistosoma haematobium (infection with) (1994)
Silica [14808-60-7] crystalline (inhaled in the form of quartz or cristobalite from occupational sources)
Solar radiation (1992)
Talc containing asbestiform fibres
Thiotepa [52-24-4] (1990)
Vinyl chloride [75-01-4]
Mixtures
Alcoholic beverages (1988)
Analgesic mixtures containing phenacetin
Betel quid with tobacco
Coal-tar pitches [65996-93-2]
Mineral oils, untreated and mildly treated
Salted fish (Chinese-style) (1993)
Shale oils [68308-34-9]
Tobacco products, smokeless
Exposure circumstances
Auramine, manufacture of
Boot and shoe manufacture and repair
Furniture and cabinet making
Haematite mining (underground) with exposure to radon
Iron and steel founding
Isopropanol manufacture (strong-acid process)
Magenta, manufacture of (1993)
Painter (occupational exposure as a) (1989)
Strong-inorganic-acid mists containing sulphuric acid (occupational exposure to) (1992)
Group 2A—Probably carcinogenic to humans (56)
Agents and groups of agents
Acrylamide [79-06-1] (1994)8
Androgenic (anabolic) steroids
Azacitidine8 [320-67-2] (1990)
Bischloroethyl nitrosourea (BCNU) [154-93-8]
1,3-Butadiene [106-99-0] (1992)
Captafol [2425-06-1] (1991)
Chloramphenicol [56-75-7] (1990)
p-Chloro-o-toluidine [95-69-2] and its strong acid salts (1990)3
Chlorozotocin8 [54749-90-5] (1990)
Clonorchis sinensis (infection with)8 (1994)
Diethyl sulphate [64-67-5] (1992)
Dimethylcarbamoyl chloride8 [79-44-7]
Dimethyl sulphate8 [77-78-1]
Ethylene dibromide8 [106-93-4]
IQ8 (2-Amino-3-methylimidazo[4,5-f]quinoline) [76180-96-6] (1993)
4,4´-Methylene bis(2-chloroaniline) (MOCA)8 [101-14-4] (1993)
N-Methyl-N´-nitro-N-nitrosoguanidine8 (MNNG) [70-25-7]
Nitrogen mustard [51-75-2]
N-Nitrosodimethylamine 8 [62-75-9]
Procarbazine hydrochloride8 [366-70-1]
Styrene-7,8-oxide8 [96-09-3] (1994)
Ultraviolet radiation A8 (1992)
Ultraviolet radiation B8 (1992)
Ultraviolet radiation C8 (1992)
Vinyl bromide6 [593-60-2]
Vinyl fluoride [75-02-5]
Mixtures
Diesel engine exhaust (1989)
Hot mate (1991)
Non-arsenical insecticides (occupational exposures in spraying and application of) (1991)
Polychlorinated biphenyls [1336-36-3]
Exposure circumstances
Art glass, glass containers and pressed ware (manufacture of) (1993)
Hairdresser or barber (occupational exposure as a) (1993)
Petroleum refining (occupational exposures in) (1989)
Sunlamps and sunbeds (use of) (1992)
Group 2B—Possibly carcinogenic to humans (225)
Agents and groups of agents
A–α–C (2-Amino-9H-pyrido[2,3-b]indole) [26148-68-5]
AF-2 [2-(2-Furyl)-3-(5-nitro-2-furyl)acrylamide] [3688-53-7]
Aflatoxin M1 [6795-23-9] (1993)
Antimony trioxide [1309-64-4] (1989)
Atrazine9 [1912-24-9] (1991)
Auramine [492-80-8] (technical-grade)
Benzyl violet 4B [1694-09-3]
Bromodichloromethane [75-27-4] (1991)
Butylated hydroxyanisole (BHA) [25013-16-5]
Caffeic acid [331-39-5] (1993)
Carbon tetrachloride [56-23-5]
Chlordane [57-74-9] (1991)
Chlordecone (Kepone) [143-50-0]
Chlorendic acid [115-28-6] (1990)
α-Chlorinated toluenes (benzyl chloride, benzal chloride, benzotrichloride)
p-Chloroaniline [106-47-8] (1993)
CI Acid Red 114 [6459-94-5] (1993)
CI Basic Red 9 [569-61-9] (1993)
CI Direct Blue 15 [2429-74-5] (1993)
Citrus Red No. 2 [6358-53-8]
Cobalt [7440-48-4] and cobalt compounds3 (1991)
Dantron (Chrysazin; 1,8-Dihydroxyanthraquinone) [117-10-2] (1990)
DDT [p,p´-DDT, 50-29-3] (1991)
4,4´-Diaminodiphenyl ether [101-80-4]
3,3´-Dichloro-4,4´-diaminodiphenyl ether [28434-86-8]
Dichloromethane (methylene chloride) [75-09-2]
1,3-Dichloropropene [542-75-6] (technical grade)
Dichlorvos [62-73-7] (1991)
Diglycidyl resorcinol ether [101-90-6]
Diisopropyl sulphate [2973-10-6] (1992)
3,3´-Dimethoxybenzidine (o-Dianisidine) [119-90-4]
2,6-Dimethylaniline (2,6-xylidine) [87-62-7] (1993)
3,3´-Dimethylbenzidine (o-tolidine) [119-93-7]
Dimethylformamide [68-12-2] (1989)
1,6-Dinitropyrene [42397-64-8] (1989)
1,8-Dinitropyrene [42397-65-9] (1989)
Disperse Blue 1 [2475-45-8] (1990)
Ethyl acrylate [140-88-5]
Ethylene thiourea [96-45-7]
Ethyl methanesulphonate [62-50-0]
Glass wool (1988)
Glu-P-2 (2-aminodipyrido[1,2-a:3´,2´-d]imidazole) [67730-10-3]
HC Blue No. 1 [2784-94-3] (1993)
Heptachlor [76-44-8] (1991)
Human immunodeficiency virus type 2 (infection with) (1996)
Human papillomaviruses: some types other than 16, 18, 31 and 33 (1995)
Iron-dextran complex [9004-66-4]
Isoprene [78-79-5] (1994)
Lead [7439-92-1] and lead compounds, inorganic3
Magenta [632-99-5] (containing CI Basic Red 9) (1993)
Medroxyprogesterone acetate [71-58-9]
MeIQ (2-Amino-3,4-dimethylimidazo[4,5-f]quinoline) [77094-11-2] (1993)
MeIQx (2-Amino-3,8-dimethylimidazo[4,5-f]quinoxaline) [77500-04-0] (1993)
2-Methylaziridine (propyleneimine) [75-55-8]
Methylazoxymethanol acetate [592-62-1]
4,4´-Methylene bis(2-methylaniline) [838-88-0]
Methylmercury compounds (1993)3
Methyl methanesulphonate [66-27-3]
2-Methyl-1-nitroanthraquinone [129-15-7] (uncertain purity)
Mitomycin C [50-07-7]
Nickel, metallic [7440-02-0] (1990)
Nitrilotriacetic acid [139-13-9] and its salts (1990)3
2-Nitroanisole [91-23-6] (1996)
Nitrobenzene [98-95-3] (1996)
6-Nitrochrysene [7496-02-8] (1989)
Nitrofen [1836-75-5], technical-grade
2-Nitrofluorene [607-57-8] (1989)
Nitrogen mustard N-oxide [126-85-2]
1-Nitropyrene [5522-43-0] (1989)
4-Nitropyrene [57835-92-4] (1989)
4-(N-Nitrosomethylamino)-1-(3-pyridyl)-1-butanone (NNK) [64091-91-4]
Ochratoxin A [303-47-9] (1993)
Oil Orange SS [2646-17-5]
Oxazepam [604-75-1] (1996)
Palygorskite (attapulgite) [12174-11-7] (long fibres, >5 micrometres) (1997)
Panfuran S (containing dihydroxymethylfuratrizine [794-93-4])
Pentachlorophenol [87-86-5] (1991)
Phenazopyridine hydrochloride [136-40-3]
Phenoxybenzamine hydrochloride [63-92-3]
Phenyl glycidyl ether [122-60-1] (1989)
PhIP (2-Amino-1-methyl-6-phenylimidazo[4,5-b]pyridine) [105650-23-5] (1993)
Ponceau MX [3761-53-3]
Ponceau 3R [3564-09-8]
Potassium bromate [7758-01-2]
1,3-Propane sultone [1120-71-4]
Propylene oxide [75-56-9] (1994)
Schistosoma japonicum (infection with) (1994)
Sodium o-phenylphenate [132-27-4]
Styrene [100-42-5] (1994)
Tetranitromethane [509-14-8] (1996)
Toluene diisocyanates [26471-62-5]
Trichlormethine (Trimustine hydrochloride) [817-09-4] (1990)
Trp-P-1 (3-Amino-1,4-dimethyl-5H-pyrido[4,3-b]indole) [62450-06-0]
Trp-P-2 (3-Amino-1-methyl-5H-pyrido[4,3-b]indole) [62450-07-1]
Trypan blue [72-57-1]
Uracil mustard [66-75-1]
Vinyl acetate [108-05-4] (1995)
4-Vinylcyclohexene [100-40-3] (1994)
4-Vinylcyclohexene diepoxide [107-87-6] (1994)
Mixtures
Bitumens [8052-42-4], extracts of steam-refined and air-refined
Carrageenan [9000-07-1], degraded
Chlorinated paraffins of average carbon chain length C12 and average degree of chlorination approximately 60% (1990)
Coffee (urinary bladder)9 (1991)
Diesel fuel, marine (1989)
Engine exhaust, gasoline (1989)
Fuel oils, residual (heavy) (1989)
Pickled vegetables (traditional in Asia) (1993)
Polybrominated biphenyls [Firemaster BP-6, 59536-65-1]
Toxaphene (Polychlorinated camphenes) [8001-35-2]
Toxins derived from Fusarium moniliforme (1993)
Welding fumes (1990)
Exposure circumstances
Carpentry and joinery
Dry cleaning (occupational exposures in) (1995)
Printing processes (occupational exposures in) (1996)
Textile manufacturing industry (work in) (1990)
Group 3—Unclassifiable as to carcinogenicity to humans (480)
Agents and groups of agents
Acridine orange [494-38-2]
Acriflavinium chloride [8018-07-3]
Acrylic acid [79-10-7]
Actinomycin D [50-76-0]
Aldicarb [116-06-3] (1991)
Allyl chloride [107-05-1]
Allyl isothiocyanate [57-06-7]
Allyl isovalerate [2835-39-4]
p-Aminobenzoic acid [150-13-0]
2-Amino-4-nitrophenol [99-57-0] (1993)
2-Amino-5-nitrophenol [121-88-0] (1993)
11-Aminoundecanoic acid [2432-99-7]
Ampicillin [69-53-4] (1990)
Angelicin [523-50-2] plus ultraviolet A radiation
Anthranilic acid [118-92-3]
Antimony trisulphide [1345-04-6] (1989)
p-Aramid fibrils [24938-64-5] (1997)
Aziridyl benzoquinone [800-24-8]
p-Benzoquinone dioxime [105-11-3]
Benzoyl chloride [98-88-4]
Benzoyl peroxide [94-36-0]
Benzyl acetate [140-11-4]
Bis(1-aziridinyl)morpholinophosphine sulphide [2168-68-5]
Bis(2,3-epoxycyclopentyl)ether [2386-90-5] (1989)
Bisphenol A diglycidyl ether [1675-54-3] (1989)
Blue VRS [129-17-9]
Brilliant Blue FCF, disodium salt [3844-45-9]
Bromochloroacetonitrile [83463-62-1] (1991)
Bromoethane [74-96-4] (1991)
Bromoform [75-25-2] (1991)
n-Butyl acrylate [141-32-2]
Butylated hydroxytoluene (BHT) [128-37-0]
Butyl benzyl phthalate [85-68-7]
Caffeine [58-08-2] (1991)
Carrageenan [9000-07-1], native
Chloral [75-87-6] (1995)
Chloral hydrate [302-17-0] (1995)
Chlorinated dibenzodioxins (other than TCDD)
Chlorinated drinking-water (1991)
Chloroacetonitrile [107-14-2] (1991)
Chlorodibromomethane [124-48-1] (1991)
Chloroethane [75-00-3] (1991)
3-Chloro-2-methylpropene [563-47-3] (1995)
Chloronitrobenzenes [88-73-3; 121-73-3; 100-00-5] (1996)
Chromium[III] compounds (1990)
Chromium [7440-47-3], metallic (1990)
CI Acid Orange 3 [6373-74-6] (1993)
Cimetidine [51481-61-9] (1990)
Cinnamyl anthranilate [87-29-6]
CI Pigment Red 3 [2425-85-6] (1993)
Clomiphene citrate [50-41-9]
Coal dust (1997)
Copper 8-hydroxyquinoline [10380-28-6]
Crotonaldehyde [4170-30-3] (1995)
Cyclamates [sodium cyclamate, 139-05-9]
Cyclohexanone [108-94-1] (1989)
D & C Red No. 9 [5160-02-1] (1993)
Decabromodiphenyl oxide [1163-19-5] (1990)
Deltamethrin [52918-63-5] (1991)
1,4-Diamino-2-nitrobenzene [5307-14-2] (1993)
Dibromoacetonitrile [3252-43-5] (1991)
Dichloroacetic acid [79-43-6] (1995)
Dichloroacetonitrile [3018-12-0] (1991)
p-Dimethylaminoazobenzenediazo sodium sulphonate [140-56-7]
4,4´-Dimethylangelicin [22975-76-4] plus ultraviolet A radiation
4,5´-Dimethylangelicin [4063-41-6] plus ultraviolet A radiation
N,N-Dimethylaniline [121-69-7] (1993)
Dimethyl hydrogen phosphite [868-85-9] (1990)
1,3-Dinitropyrene [75321-20-9] (1989)
Disperse Yellow 3 [2832-40-8] (1990)
Doxefazepam [40762-15-0] (1996)
Droloxifene [82413-20-5] (1996)
1,2-Epoxybutane [106-88-7] (1989)
3,4-Epoxy-6-methylcyclohexylmethyl-3,4-epoxy-6-methylcyclohexane carboxylate [141-37-7]
cis-9,10-Epoxystearic acid [2443-39-2]
Estazolam [29975-16-4] (1996)
Ethylene [74-85-1] (1994)
Ethylene sulphide [420-12-2]
2-Ethylhexyl acrylate [103-11-7] (1994)
Ethyl selenac [5456-28-0]
Ethyl tellurac [20941-65-5]
Evans blue [314-13-6]
Fast Green FCF [2353-45-9]
Fenvalerate [51630-58-1] (1991)
Ferric oxide [1309-37-1]
Fluorescent lighting (1992)
Fluorides (inorganic, used in drinking-water)
Furfural [98-01-1] (1995)
Furosemide (Frusemide) [54-31-9] (1990)
Gemfibrozil [25812-30-0] (1996)
Glass filaments (1988)
Glycidyl oleate [5431-33-4]
Glycidyl stearate [7460-84-6]
Guinea Green B [4680-78-8]
HC Blue No. 2 [33229-34-4] (1993)
HC Red No. 3 [2871-01-4] (1993)
HC Yellow No. 4 [59820-43-8] (1993)
Hepatitis D virus (1993)
Human T-cell lymphotropic virus type II (1996)
Hycanthone mesylate [23255-93-8]
Hydrochloric acid [7647-01-0] (1992)
Hydrochlorothiazide [58-93-5] (1990)
Hydrogen peroxide [7722-84-1]
Hypochlorite salts (1991)
Iron-dextrin complex [9004-51-7]
Iron sorbitol-citric acid complex [1338-16-5]
Isonicotinic acid hydrazide (Isoniazid) [54-85-3]
Lauroyl peroxide [105-74-8]
Lead, organo [75-74-1], [78-00-2]
Light Green SF [5141-20-8]
d-Limonene [5989-27-5] (1993)
Maleic hydrazide [123-33-1]
Mannomustine dihydrochloride [551-74-6]
Mercury [7439-97-6] and inorganic mercury compounds (1993)
Methyl acrylate [96-33-3]
5-Methylangelicin [73459-03-7] plus ultraviolet A radiation
Methyl bromide [74-83-9]
Methyl carbamate [598-55-0]
Methyl chloride [74-87-3]
4,4´-Methylenediphenyl diisocyanate [101-68-8]
Methylglyoxal [78-98-8] (1991)
Methyl iodide [74-88-4]
Methyl methacrylate [80-62-6] (1994)
N-Methylolacrylamide [90456-67-0] (1994)
Methyl parathion [298-00-0]
Methyl red [493-52-7]
Methyl selenac [144-34-3]
Monuron [150-68-5] (1991)
Morpholine [110-91-8] (1989)
Musk ambrette [83-66-9] (1996)
Musk xylene [81-15-2] (1996)
1,5-Naphthalene diisocyanate [3173-72-6]
1-Naphthylthiourea (ANTU) [86-88-4]
7-Nitrobenz[a]anthracene [20268-51-3] (1989)
6-Nitrobenzo[a]pyrene [63041-90-7] (1989)
Nitrofural (Nitrofurazone) [59-87-0] (1990)
Nitrofurantoin [67-20-9] (1990)
1-Nitronaphthalene [86-57-7] (1989)
2-Nitronaphthalene [581-89-5] (1989)
3-Nitroperylene [20589-63-3] (1989)
2-Nitropyrene [789-07-1] (1989)
N-Nitrosofolic acid [29291-35-8]
4-(N-Nitrosomethylamino)-4-(3-pyridyl)-1-butanal (NNA) [64091-90-3]
5-Nitro-o-toluidine [99-55-8] (1990)
Nylon 6 [25038-54-4]
Oestradiol mustard [22966-79-6]
Oestrogen-progestin replacement therapy
Opisthorchis felineus (infection with) (1994)
Orange I [523-44-4]
Orange G [1936-15-8]
Palygorskite (attapulgite) [12174-11-7] (short fibres, <5 micrometres) (1997)
Paracetamol (Acetaminophen) [103-90-2] (1990)
Parasorbic acid [10048-32-5]
Penicillic acid [90-65-3]
Permethrin [52645-53-1] (1991)
Phenelzine sulphate [156-51-4]
Phenol [108-95-2] (1989)
Picloram [1918-02-1] (1991)
Piperonyl butoxide [51-03-6]
Polyacrylic acid [9003-01-4]
Polychlorinated dibenzo-p-dioxins (other than 2,3,7,8-tetra-chlorodibenzo-p-dioxin) (1997)
Polychlorinated dibenzofurans (1997)
Polymethylene polyphenyl isocyanate [9016-87-9]
Polymethyl methacrylate [9011-14-7]
Polyurethane foams [9009-54-5]
Polyvinyl acetate [9003-20-7]
Polyvinyl alcohol [9002-89-5]
Polyvinyl chloride [9002-86-2]
Polyvinyl pyrrolidone [9003-39-8]
Ponceau SX [4548-53-2]
Prazepam [2955-38-6] (1996)
Prednimustine [29069-24-7] (1990)
Pronetalol hydrochloride [51-02-5]
n-Propyl carbamate [627-12-3]
Propylene [115-07-1] (1994)
Quintozene (Pentachloronitrobenzene) [82-68-8]
Rhodamine B [81-88-9]
Rhodamine 6G [989-38-8]
Ripazepam [26308-28-1] (1996)
Saccharated iron oxide [8047-67-4]
Scarlet Red [85-83-6]
Schistosoma mansoni (infection with) (1994)
Selenium [7782-49-2] and selenium compounds
Semicarbazide hydrochloride [563-41-7]
Shikimic acid [138-59-0]
Silica [7631-86-9], amorphous
Simazine [122-34-9] (1991)
Sodium chlorite [7758-19-2] (1991)
Sodium diethyldithiocarbamate [148-18-5]
Styrene-acrylonitrile copolymers [9003-54-7]
Styrene-butadiene copolymers [9003-55-8]
Succinic anhydride [108-30-5]
Sudan I [842-07-9]
Sudan II [3118-97-6]
Sudan III [85-86-9]
Sudan Brown RR [6416-57-5]
Sudan Red 7B [6368-72-5]
Sulphafurazole (Sulphisoxazole) [127-69-5]
Sulphur dioxide [7446-09-5] (1992)
Sunset Yellow FCF [2783-94-0]
Talc [14807-96-6], not containing asbestiform fibres
Tannic acid [1401-55-4] and tannins
Temazepam [846-50-4] (1996)
Tetrakis(hydroxymethyl)phosphonium salts (1990)
Theobromine [83-67-0] (1991)
Theophylline [58-55-9] (1991)
Thiram [137-26-8] (1991)
Titanium dioxide [13463-67-7] (1989)
Toluene [108-88-3] (1989)
Toremifene [89778-26-7] (1996)
Toxins derived from Fusarium graminearum, F. culmorum and F. crookwellense (1993)
Toxins derived from Fusarium sporotrichioides (1993)
Trichloroacetic acid [76-03-9] (1995)
Trichloroacetonitrile [545-06-2] (1991)
1,1,2-Trichloroethane [79-00-5] (1991)
Triethylene glycol diglydicyl ether [1954-28-5]
Trifluralin [1582-09-8] (1991)
4,4´,6-Trimethylangelicin [90370-29-9] plus ultraviolet A radiation
2,4,6-Trinitrotoluene [118-96-7] (1996)
Tris(aziridinyl)-p-benzoquinone (Triaziquone) [68-76-8]
Tris(1-aziridinyl)phosphine oxide [545-55-1]
Tris(2-chloroethyl)phosphate [115-96-8] (1990)
Tris(2-methyl-1-aziridinyl)phosphine oxide [57-39-6]
Vat Yellow 4 [128-66-5] (1990)
Vinblastine sulphate [143-67-9]
Vincristine sulphate [2068-78-2]
Vinyl acetate [108-05-4]
Vinyl chloride-vinyl acetate copolymers [9003-22-9]
Vinylidene chloride [75-35-4]
Vinylidene chloride-vinyl chloride copolymers [9011-06-7]
Vinylidene fluoride [75-38-7]
Vinyl toluene [25013-15-4] (1994)
Xylene [1330-20-7] (1989)
Yellow AB [85-84-7]
Yellow OB [131-79-3]
Zeolites [1318-02-1] other than erionite (clinoptilolite, phillipsite, mordenite, non-fibrous Japanese zeolite, synthetic zeolites) (1997)
Ziram [137-30-4] (1991)
Mixtures
Betel quid, without tobacco
Bitumens [8052-42-4], steam-refined, cracking-residue and air-refined
Crude oil [8002-05-9] (1989)
Diesel fuels, distillate (light) (1989)
Fuel oils, distillate (light) (1989)
Jet fuel (1989)
Mineral oils, highly refined
Petroleum solvents (1989)
Printing inks (1996)
Terpene polychlorinates (Strobane®) [8001-50-1]
Exposure circumstances
Flat-glass and specialty glass (manufacture of) (1993)
Hair colouring products (personal use of) (1993)
Leather goods manufacture
Leather tanning and processing
Lumber and sawmill industries (including logging)
Paint manufacture (occupational exposure in) (1989)
Pulp and paper manufacture
Group 4—Probably not carcinogenic to humans (1)
Neurotoxicity and reproductive toxicity are important areas for risk assessment, since the nervous and reproductive systems are highly sensitive to xenobiotic effects. Many agents have been identified as toxic to these systems in humans (Barlow and Sullivan 1982; OTA 1990). Many pesticides are deliberately designed to disrupt reproduction and neurological function in target organisms, such as insects, through interference with hormonal biochemistry and neurotransmission.
It is difficult to identify substances potentially toxic to these systems for three interrelated reasons: first, these are among the most complex biological systems in humans, and animal models of reproductive and neurological function are generally acknowledged to be inadequate for representing such critical events as cognition or early embryofoetal development; second, there are no simple tests for identifying potential reproductive or neurological toxicants; and third, these systems contain multiple cell types and organs, such that no single set of mechanisms of toxicity can be used to infer dose-response relationships or predict structure-activity relationships (SAR). Moreover, it is known that the sensitivity of both the nervous and reproductive systems varies with age, and that exposures at critical periods may have much more severe effects than at other times.
Neurotoxicity Risk Assessment
Neurotoxicity is an important public health problem. As shown in table 1, there have been several episodes of human neurotoxicity involving thousands of workers and other populations exposed through industrial releases, contaminated food, water and other vectors. Occupational exposures to neurotoxins such as lead, mercury, organophosphate insecticides and chlorinated solvents are widespread throughout the world (OTA 1990; Johnson 1978).
Table 1. Selected major neurotoxicity incidents
|Year||Location||Substance||Comments|
|400 BC||Rome||Lead||Hippocrates recognizes lead toxicity in the mining industry.|
|1930s||United States (Southeast)||TOCP||Compound often added to lubricating oils contaminates “Ginger Jake,” an alcoholic beverage; more than 5,000 paralyzed, 20,000 to 100,000 affected.|
|1930s||Europe||Apiol (with TOCP)||Abortion-inducing drug containing TOCP causes 60 cases of neuropathy.|
|1932||United States (California)||Thallium||Barley laced with thallium sulphate, used as rodenticide, is stolen and used to make tortillas; 13 family members hospitalized with neurological symptoms; 6 deaths.|
|1937||South Africa||TOCP||60 South Africans develop paralysis after using contaminated cooking oil.|
|1946||—||Tetraethyl lead||More than 25 individuals suffer neurological effects after cleaning gasoline tanks.|
|1950s||Japan (Minamata)||Mercury||Hundreds ingest fish and shellfish contaminated with mercury from chemical plant; 121 poisoned, 46 deaths, many infants with serious nervous system damage.|
|1950s||France||Organotin||Contamination of Stallinon with triethyltin results in more than 100 deaths.|
|1950s||Morocco||Manganese||150 ore miners suffer chronic manganese intoxication involving severe neurobehavioural problems.|
|1950s-1970s||United States||AETT||Component of fragrances found to be neurotoxic; withdrawn from market in 1978; human health effects unknown.|
|1956||—||Endrin||49 persons become ill after eating bakery foods prepared from flour contaminated with the insecticide endrin; convulsions result in some instances.|
|1956||Turkey||HCB||Hexachlorobenzene, a seed grain fungicide, leads to poisoning of 3,000 to 4,000; 10 per cent mortality rate.|
|1956-1977||Japan||Clioquinol||Drug used to treat travellers’ diarrhoea found to cause neuropathy; as many as 10,000 affected over two decades.|
|1959||Morocco||TOCP||Cooking oil contaminated with lubricating oil affects some 10,000 individuals.|
|1960||Iraq||Mercury||Mercury used as fungicide to treat seed grain used in bread; more than 1,000 people affected.|
|1964||Japan||Mercury||Methylmercury affects 646 people.|
|1968||Japan||PCBs||Polychlorinated biphenyls leaked into rice oil; 1,665 people affected.|
|1969||Japan||n-Hexane||93 cases of neuropathy occur following exposure to n-hexane, used to make vinyl sandals.|
|1971||United States||Hexachlorophene||After years of bathing infants in 3 per cent hexachlorophene, the disinfectant is found to be toxic to the nervous system and other systems.|
|1971||Iraq||Mercury||Mercury used as fungicide to treat seed grain is used in bread; more than 5,000 severe poisonings, 450 hospital deaths, effects on many infants exposed prenatally not documented.|
|1973||United States (Ohio)||MIBK||Fabric production plant employees exposed to solvent; more than 80 workers suffer neuropathy, 180 have less severe effects.|
|1974-1975||United States (Hopewell, VA)||Chlordecone (Kepone)||Chemical plant employees exposed to insecticide; more than 20 suffer severe neurological problems, more than 40 have less severe problems.|
|1976||United States (Texas)||Leptophos (Phosvel)||At least 9 employees suffer severe neurological problems following exposure to insecticide during manufacturing process.|
|1977||United States (California)||Dichloropropene (Telone II)||24 individuals hospitalized after exposure to pesticide Telone following traffic accident.|
|1979-1980||United States (Lancaster, TX)||BHMH (Lucel-7)||Seven employees at plastic bathtub manufacturing plant experience serious neurological problems following exposure to BHMH.|
|1980s||United States||MPTP||Impurity in synthesis of illicit drug found to cause symptoms identical to those of Parkinson’s disease.|
|1981||Spain||Contaminated toxic oil||20,000 persons poisoned by toxic substance in oil, resulting in more than 500 deaths; many suffer severe neuropathy.|
|1985||United States and Canada||Aldicarb||More than 1,000 individuals in California and other Western States and British Columbia experience neuromuscular and cardiac problems following ingestion of melons contaminated with the pesticide aldicarb.|
|1987||Canada||Domoic acid||Ingestion of mussels contaminated with domoic acid causes 129 illnesses and 2 deaths; symptoms include memory loss, disorientation and seizures.|
Source: OTA 1990.
Chemicals may affect the nervous system through actions at any of several cellular targets or biochemical processes within the central or peripheral nervous system. Toxic effects on other organs may also affect the nervous system, as in the example of hepatic encephalopathy. The manifestations of neurotoxicity include effects on learning (including memory, cognition and intellectual performance), somatosensory processes (including sensation and proprioception), motor function (including balance, gait and fine movement control), affect (including personality status and emotionality) and autonomic function (nervous control of endocrine function and internal organ systems).
The toxic effects of chemicals upon the nervous system often vary in sensitivity and expression with age: during development, the central nervous system may be especially susceptible to toxic insult because of the extended process of cellular differentiation, migration and cell-to-cell contact that takes place in humans (OTA 1990). Moreover, cytotoxic damage to the nervous system may be irreversible because neurons are not replaced after embryogenesis.
While the central nervous system (CNS) is somewhat protected from contact with absorbed compounds by a system of tightly joined cells (the blood-brain barrier, composed of capillary endothelial cells that line the vasculature of the brain), toxic chemicals can gain access to the CNS by three mechanisms: solvents and lipophilic compounds can pass through cell membranes; some compounds can attach to endogenous transporter proteins that serve to supply nutrients and biomolecules to the CNS; and small proteins, if inhaled, can be taken up directly by the olfactory nerve and transported to the brain.
US regulatory authorities
Statutory authority for regulating substances for neurotoxicity is assigned to four agencies in the United States: the Food and Drug Administration (FDA), the Environmental Protection Agency (EPA), the Occupational Safety and Health Administration (OSHA), and the Consumer Product Safety Commission (CPSC). While OSHA generally regulates occupational exposures to neurotoxic (and other) chemicals, the EPA has authority to regulate occupational and nonoccupational exposures to pesticides under the Federal Insecticide, Fungicide and Rodenticide Act (FIFRA). EPA also regulates new chemicals prior to manufacture and marketing, which obligates the agency to consider both occupational and nonoccupational risks.
Agents that adversely affect the physiology, biochemistry, or structural integrity of the nervous system or nervous system function expressed behaviourally are defined as neurotoxic hazards (EPA 1993). The determination of inherent neurotoxicity is a difficult process, owing to the complexity of the nervous system and the multiple expressions of neurotoxicity. Some effects may be delayed in appearance, such as the delayed neurotoxicity of certain organophosphate insecticides. Caution and judgement are required in determining neurotoxic hazard, including consideration of the conditions of exposure, dose, duration and timing.
Hazard identification is usually based upon toxicological studies of intact organisms, in which behavioural, cognitive, motor and somatosensory function is assessed with a range of investigative tools including biochemistry, electrophysiology and morphology (Tilson and Cabe 1978; Spencer and Schaumberg 1980). The importance of careful observation of whole organism behaviour cannot be overemphasized. Hazard identification also requires evaluation of toxicity at different developmental stages, including early life (intrauterine and early neonatal) and senescence. In humans, the identification of neurotoxicity involves clinical evaluation using methods of neurological assessment of motor function, speech fluency, reflexes, sensory function, electrophysiology, neuropsychological testing, and in some cases advanced techniques of brain imaging and quantitative electroencephalography. WHO has developed and validated a neurobehavioural core test battery (NCTB), which contains probes of motor function, hand-eye coordination, reaction time, immediate memory, attention and mood. This battery has been validated internationally by a coordinated process (Johnson 1978).
Hazard identification using animals also depends upon careful observational methods. The US EPA has developed a functional observational battery as a first-tier test designed to detect and quantify major overt neurotoxic effects (Moser 1990). This approach is also incorporated in the OECD subchronic and chronic toxicity testing methods. A typical battery includes the following measures: posture; gait; mobility; general arousal and reactivity; presence or absence of tremor, convulsions, lacrimation, piloerection, salivation, excess urination or defecation, stereotypy, circling, or other bizarre behaviours. Elicited behaviours include response to handling, tail pinch, or clicks; balance, righting reflex, and hind limb grip strength. Some representative tests and agents identified with these tests are shown in table 2.
Table 2. Examples of specialized tests to measure neurotoxicity
|Function||Representative procedures||Representative agents|
|Weakness||Grip strength; swimming endurance; suspension from rod; discriminative motor function; hind limb splay||n-Hexane, Methylbutylketone, Carbaryl|
|Incoordination||Rotarod, gait measurements||3-Acetylpyridine, Ethanol|
|Tremor||Rating scale, spectral analysis||Chlordecone, Type I Pyrethroids, DDT|
|Myoclonia, spasms||Rating scale, spectral analysis||DDT, Type II Pyrethroids|
|Auditory||Discriminant conditioning, reflex modification||Toluene, Trimethyltin|
|Visual toxicity||Discriminant conditioning||Methyl mercury|
|Somatosensory toxicity||Discriminant conditioning||Acrylamide|
|Pain sensitivity||Discriminant conditioning (titration); functional observational battery||Parathion|
|Olfactory toxicity||Discriminant conditioning||3-Methylindole, Methyl bromide|
|Habituation||Startle reflex||Diisopropylfluorophosphate (DFP)|
|Classical conditioning||Nictitating membrane, conditioned flavour aversion, passive avoidance, olfactory conditioning||Aluminium, Carbaryl, Trimethyltin, IDPN, Trimethyltin (neonatal)|
|Operant or instrumental conditioning||One-way avoidance, Two-way avoidance, Y-maze avoidance, Biel water maze, Morris water maze, Radial arm maze, Delayed matching to sample, Repeated acquisition, Visual discrimination learning||Chlordecone, Lead (neonatal), Hypervitaminosis A, Styrene, DFP, Trimethyltin, DFP, Carbaryl, Lead|
Source: EPA 1993.
These tests may be followed by more complex assessments usually reserved for mechanistic studies rather than hazard identification. In vitro methods for neurotoxicity hazard identification are limited since they do not provide indications of effects on complex function, such as learning, but they may be very useful in defining target sites of toxicity and improving the precision of target site dose-response studies (see WHO 1986 and EPA 1993 for comprehensive discussions of principles and methods for identifying potential neurotoxicants).
The relationship between toxicity and dose may be based upon human data when available or upon animal tests, as described above. In the United States, an uncertainty or safety factor approach is generally used for neurotoxicants. This process involves determining a “no observed adverse effect level” (NOAEL) or “lowest observed adverse effect level” (LOAEL) and then dividing this number by uncertainty or safety factors (usually multiples of 10) to allow for such considerations as incompleteness of data, potentially higher sensitivity of humans and variability of human response due to age or other host factors. The resultant number is termed the reference dose (RfD) or reference concentration (RfC). The effect occurring at the lowest dose in the most sensitive animal species and gender is generally used to determine the LOAEL or NOAEL. Conversion of animal dose to human exposure is done by standard methods of cross-species dosimetry, taking into account differences in lifespan and exposure duration.
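The uncertainty-factor arithmetic described above is simple to illustrate. The sketch below computes a reference dose by dividing a NOAEL by the product of 10-fold factors; the NOAEL value and the choice of factors are hypothetical examples, not values from any actual assessment.

```python
# Illustrative sketch of the reference dose (RfD) calculation:
# RfD = NOAEL / (product of uncertainty factors).
# All numbers below are hypothetical, not regulatory values.

def reference_dose(noael_mg_kg_day, uncertainty_factors):
    """Divide the NOAEL by the combined uncertainty factor."""
    total_uf = 1
    for uf in uncertainty_factors:
        total_uf *= uf
    return noael_mg_kg_day / total_uf

# Hypothetical NOAEL of 5 mg/kg/day from an animal study, with 10-fold
# factors for animal-to-human extrapolation, human variability and an
# incomplete database (the "multiples of 10" noted in the text).
rfd = reference_dose(5.0, [10, 10, 10])
print(rfd)  # 0.005 mg/kg/day
```

Had a LOAEL rather than a NOAEL been available, an additional 10-fold factor would typically be applied in the same way.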
The use of the uncertainty factor approach assumes that there is a threshold, or dose below which no adverse effect is induced. Thresholds for specific neurotoxicants may be difficult to determine experimentally; they are based upon assumptions as to mechanism of action which may or may not hold for all neurotoxicants (Silbergeld 1990).
At this stage, information is evaluated on sources, routes, doses and durations of exposure to the neurotoxicant for human populations, subpopulations or even individuals. This information may be derived from monitoring of environmental media or human sampling, or from estimates based upon standard scenarios (such as workplace conditions and job descriptions) or models of environmental fate and dispersion (see EPA 1992 for general guidelines on exposure assessment methods). In some limited cases, biological markers may be used to validate exposure inferences and estimates; however, there are relatively few usable biomarkers of neurotoxicants.
The combination of hazard identification, dose-response and exposure assessment is used to develop the risk characterization. This process involves assumptions as to the extrapolation of high to low doses, extrapolation from animals to humans, and the appropriateness of threshold assumptions and use of uncertainty factors.
Reproductive Toxicology—Risk Assessment Methods
Reproductive hazards may affect multiple functional endpoints and cellular targets within humans, with consequences for the health of the affected individual and future generations. Reproductive hazards may affect the development of the reproductive system in males or females, reproductive behaviours, hormonal function, the hypothalamus and pituitary, gonads and germ cells, fertility, pregnancy and the duration of reproductive function (OTA 1985). In addition, mutagenic chemicals may also affect reproductive function by damaging the integrity of germ cells (Dixon 1985).
The nature and extent of adverse effects of chemical exposures upon reproductive function in human populations is largely unknown. Relatively little surveillance information is available on such endpoints as fertility of men or women, age of menopause in women, or sperm counts in men. However, both men and women are employed in industries where exposures to reproductive hazards may occur (OTA 1985).
This section does not recapitulate those elements common to both neurotoxicant and reproductive toxicant risk assessment, but focuses upon issues specific to reproductive toxicant risk assessment. As with neurotoxicants, authority to regulate chemicals for reproductive toxicity is placed by statute in the EPA, OSHA, the FDA and the CPSC. Of these agencies, only the EPA has a stated set of guidelines for reproductive toxicity risk assessment. In addition, the state of California has developed methods for reproductive toxicity risk assessment in response to a state law, Proposition 65 (Pease et al. 1991).
Reproductive toxicants, like neurotoxicants, may act by affecting any of a number of target organs or molecular sites of action. Their assessment has additional complexity because of the need to evaluate three distinct organisms separately and together—the male, the female and the offspring (Mattison and Thomford 1989). While an important endpoint of reproductive function is the generation of a healthy child, reproductive biology also plays a role in the health of developing and mature organisms regardless of their involvement in procreation. For instance, loss of ovulatory function through natural depletion or surgical removal of oocytes has substantial effects upon the health of women, involving changes in blood pressure, lipid metabolism and bone physiology. Changes in hormone biochemistry may affect susceptibility to cancer.
The identification of a reproductive hazard may be made on the basis of human or animal data. In general, data from humans are relatively sparse, owing to the need for careful surveillance to detect alterations in reproductive function, such as sperm count or quality, ovulatory frequency and cycle length, or age at puberty. Detecting reproductive hazards through collection of information on fertility rates or data on pregnancy outcome may be confounded by the intentional suppression of fertility exercised by many couples through family-planning measures. Careful monitoring of selected populations indicates that rates of reproductive failure (miscarriage) may be very high, when biomarkers of early pregnancy are assessed (Sweeney et al. 1988).
Testing protocols using experimental animals are widely used to identify reproductive toxicants. In most of these designs, as developed in the United States by the FDA and the EPA and internationally by the OECD test guidelines program, the effects of suspect agents are detected in terms of fertility after male and/or female exposure; observation of sexual behaviours related to mating; and histopathological examination of gonads and accessory sex glands, such as mammary glands (EPA 1994). Often reproductive toxicity studies involve continuous dosing of animals for one or more generations in order to detect effects on the integrated reproductive process as well as to study effects on specific organs of reproduction. Multigenerational studies are recommended because they permit detection of effects that may be induced by exposure during the development of the reproductive system in utero. A special test protocol, the Reproductive Assessment by Continuous Breeding (RACB), has been developed in the United States by the National Toxicology Program. This test provides data on changes in the temporal spacing of pregnancies (reflecting ovulatory function), as well as number and size of litters over the entire test period. When extended to the lifetime of the female, it can yield information on early reproductive failure. Sperm measures can be added to the RACB to detect changes in male reproductive function. A special test to detect pre- or postimplantation loss is the dominant lethal test, designed to detect mutagenic effects in male spermatogenesis.
In vitro tests have also been developed as screens for reproductive (and developmental) toxicity (Heindel and Chapin 1993). These tests are generally used to supplement in vivo test results by providing more information on target site and mechanism of observed effects.
Table 3 shows the three types of endpoints in reproductive toxicity assessment—couple-mediated, female-specific and male-specific. Couple-mediated endpoints include those detectable in multigenerational and single-organism studies. They generally include assessment of offspring as well. It should be noted that fertility measurement in rodents is generally insensitive, as compared to such measurement in humans, and that adverse effects on reproductive function may well occur at lower doses than those that significantly affect fertility (EPA 1994). Male-specific endpoints can include dominant lethality tests as well as histopathological evaluation of organs and sperm, measurement of hormones, and markers of sexual development. Sperm function can also be assessed by in vitro fertilization methods to detect germ cell properties of penetration and capacitation; these tests are valuable because they are directly comparable to in vitro assessments conducted in human fertility clinics, but they do not by themselves provide dose-response information. Female-specific endpoints include, in addition to organ histopathology and hormone measurements, assessment of the sequelae of reproduction, including lactation and offspring growth.
Table 3. Endpoints in reproductive toxicology
|Couple-mediated endpoints|
|Multigenerational studies||Mating rate, time to mating (time to pregnancy1); litter size (total and live); number of live and dead offspring (foetal death rate1)|
|Other reproductive endpoints||External malformations and variations1; internal malformations and variations1; postnatal structural and functional development1|
|Male-specific endpoints|
|Organ weights||Testes, epididymides, seminal vesicles, prostate, pituitary|
|Visual examination and histopathology||Testes, epididymides, seminal vesicles, prostate, pituitary|
|Sperm evaluation||Sperm number (count) and quality (morphology, motility)|
|Hormone levels||Luteinizing hormone, follicle stimulating hormone, testosterone, oestrogen, prolactin|
|Developmental||Testis descent1, preputial separation, sperm production1, ano-genital distance, normality of external genitalia1|
|Female-specific endpoints|
|Organ weights||Ovary, uterus, vagina, pituitary|
|Visual examination and histopathology||Ovary, uterus, vagina, pituitary, oviduct, mammary gland|
|Oestrous (menstrual1) cycle normality||Vaginal smear cytology|
|Hormone levels||LH, FSH, oestrogen, progesterone, prolactin|
|Developmental||Normality of external genitalia1, vaginal opening, vaginal smear cytology, onset of oestrus behaviour (menstruation1)|
|Senescence||Vaginal smear cytology, ovarian histology|
1 Endpoints that can be obtained relatively noninvasively with humans.
Source: EPA 1994.
In the United States, the hazard identification concludes with a qualitative evaluation of toxicity data by which chemicals are judged to have either sufficient or insufficient evidence of hazard (EPA 1994). “Sufficient” evidence includes epidemiological data providing convincing evidence of a causal relationship (or lack thereof), based upon case-control or cohort studies, or well-supported case series. Sufficient animal data may be coupled with limited human data to support a finding of a reproductive hazard: to be sufficient, the experimental studies are generally required to utilize EPA’s two-generation test guidelines, and must include a minimum of data demonstrating an adverse reproductive effect in an appropriate, well-conducted study in one test species. Limited human data may or may not be available; such data are not necessary for the purposes of hazard identification. To rule out a potential reproductive hazard, the animal data must include an adequate array of endpoints from more than one study showing no adverse reproductive effect at doses minimally toxic to the animal (EPA 1994).
As with the evaluation of neurotoxicants, the demonstration of dose-related effects is an important part of risk assessment for reproductive toxicants. Two particular difficulties in dose-response analyses arise due to complicated toxicokinetics during pregnancy, and the importance of distinguishing specific reproductive toxicity from general toxicity to the organism. Debilitated animals, or animals with substantial nonspecific toxicity (such as weight loss) may fail to ovulate or mate. Maternal toxicity can affect the viability of pregnancy or support for lactation. These effects, while evidence of toxicity, are not specific to reproduction (Kimmel et al. 1986). Assessing dose response for a specific endpoint, such as fertility, must be done in the context of an overall assessment of reproduction and development. Dose-response relationships for different effects may differ significantly, and one effect may interfere with the detection of another. For instance, agents that reduce litter size may result in no effects upon litter weight because of reduced competition for intrauterine nutrition.
An important component of exposure assessment for reproductive risk assessment relates to information on the timing and duration of exposures. Cumulative exposure measures may be insufficiently precise, depending upon the biological process that is affected. It is known that exposures at different developmental stages in males and females can result in different outcomes in both humans and experimental animals (Gray et al. 1988). The temporal nature of spermatogenesis and ovulation also affects outcome. Effects on spermatogenesis may be reversible if exposures cease; however, oocyte toxicity is not reversible since females have a fixed set of germ cells to draw upon for ovulation (Mattison and Thomford 1989).
As with neurotoxicants, the existence of a threshold is usually assumed for reproductive toxicants. However, the actions of mutagenic compounds on germ cells may be considered an exception to this general assumption. For other endpoints, an RfD or RfC is calculated as with neurotoxicants by determination of the NOAEL or LOAEL and application of appropriate uncertainty factors. The effect used for determining the NOAEL or LOAEL is the most sensitive adverse reproductive endpoint from the most appropriate or most sensitive mammalian species (EPA 1994). Uncertainty factors include consideration of interspecies and intraspecies variation, ability to define a true NOAEL, and sensitivity of the endpoint detected.
Risk characterizations should also be focused upon specific subpopulations at risk, possibly specifying males and females, pregnancy status, and age. Especially sensitive individuals, such as lactating women, women with reduced oocyte numbers or men with reduced sperm counts, and prepubertal adolescents may also be considered.
The identification of carcinogenic risks to humans has been the objective of the IARC Monographs on the Evaluation of Carcinogenic Risks to Humans since 1971. To date, 69 volumes of monographs have been published or are in press, with evaluations of carcinogenicity of 836 agents or exposure circumstances (see Appendix).
These qualitative evaluations of carcinogenic risk to humans are equivalent to the hazard identification phase in the now generally accepted scheme of risk assessment, which involves identification of hazard, dose-response assessment (including extrapolation outside the limits of observations), exposure assessment and risk characterization.
The aim of the IARC Monographs programme has been to publish critical qualitative evaluations on the carcinogenicity to humans of agents (chemicals, groups of chemicals, complex mixtures, physical or biological factors) or exposure circumstances (occupational exposures, cultural habits) through international cooperation in the form of expert working groups. The working groups prepare monographs on a series of individual agents or exposures and each volume is published and widely distributed. Each monograph consists of a brief description of the physical and chemical properties of the agent; methods for its analysis; a description of how it is produced, how much is produced, and how it is used; data on occurrence and human exposure; summaries of case reports and epidemiological studies of cancer in humans; summaries of experimental carcinogenicity tests; a brief description of other relevant biological data, such as toxicity and genetic effects, that may indicate its possible mechanism of action; and an evaluation of its carcinogenicity. The first part of this general scheme is adjusted appropriately when dealing with agents other than chemicals or chemical mixtures.
The guiding principles for evaluating carcinogens have been drawn up by various ad hoc groups of experts and are laid down in the Preamble to the Monographs (IARC 1994a).
Tools for Qualitative Carcinogenic Risk (Hazard) Identification
Associations are established by examining the available data from studies of exposed humans, the results of bioassays in experimental animals and studies of exposure, metabolism, toxicity and genetic effects in both humans and animals.
Studies of cancer in humans
Three types of epidemiological studies contribute to an assessment of carcinogenicity: cohort studies, case-control studies and correlation (or ecological) studies. Case reports of cancer may also be reviewed.
Cohort and case-control studies relate individual exposures under study to the occurrence of cancer in individuals and provide an estimate of relative risk (ratio of the incidence in those exposed to the incidence in those not exposed) as the main measure of association.
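The relative risk measure just described is a straightforward ratio of incidences, as the sketch below shows; the cohort counts are invented for illustration.

```python
# Relative risk from a cohort study: the ratio of cancer incidence
# among the exposed to incidence among the unexposed. Counts invented.

def relative_risk(cases_exposed, n_exposed, cases_unexposed, n_unexposed):
    incidence_exposed = cases_exposed / n_exposed
    incidence_unexposed = cases_unexposed / n_unexposed
    return incidence_exposed / incidence_unexposed

# Hypothetical cohort: 30 cancers among 1,000 exposed workers versus
# 10 among 1,000 unexposed workers, i.e., a relative risk of about 3,
# which would count as a strong association in the sense used below.
rr = relative_risk(30, 1000, 10, 1000)
print(round(rr, 2))
```

A relative risk close to 1 would indicate little or no association; case-control studies yield an odds ratio that approximates this quantity when the disease is rare.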
In correlation studies, the unit of investigation is usually whole populations (e.g., particular geographical areas) and cancer frequency is related to a summary measure of the exposure of the population to the agent. Because individual exposure is not documented, a causal relationship is less easy to infer from such studies than from cohort and case-control studies.
Case reports generally arise from a suspicion, based on clinical experience, that the concurrence of two events—that is, a particular exposure and occurrence of a cancer—has happened rather more frequently than would be expected by chance. The uncertainties surrounding interpretation of case reports and correlation studies make them inadequate, except in rare cases, to form the sole basis for inferring a causal relationship.
In the interpretation of epidemiological studies, it is necessary to take into account the possible roles of bias and confounding. By bias is meant the operation of factors in study design or execution that lead erroneously to a stronger or weaker association than in fact exists between disease and an agent. By confounding is meant a situation in which the relationship with disease is made to appear stronger or weaker than it truly is as a result of an association between the apparent causal factor and another factor that is associated with either an increase or decrease in the incidence of the disease.
In the assessment of the epidemiological studies, a strong association (i.e., a large relative risk) is more likely to indicate causality than a weak association, although it is recognized that relative risks of small magnitude do not imply lack of causality and may be important if the disease is common. Associations that are replicated in several studies of the same design or using different epidemiological approaches or under different circumstances of exposure are more likely to represent a causal relationship than isolated observations from single studies. An increase in risk of cancer with increasing amounts of exposure is considered to be a strong indication of causality, although the absence of a graded response is not necessarily evidence against a causal relationship. Demonstration of a decline in risk after cessation of or reduction in exposure in individuals or in whole populations also supports a causal interpretation of the findings.
When several epidemiological studies show little or no indication of an association between an exposure and cancer, the judgement may be made that, in the aggregate, they show evidence suggesting lack of carcinogenicity. The possibility that bias, confounding or misclassification of exposure or outcome could explain the observed results must be considered and excluded with reasonable certainty. Evidence suggesting lack of carcinogenicity obtained from several epidemiological studies can apply only to those type(s) of cancer, dose levels and intervals between first exposure and observation of disease that were studied. For some human cancers, the period between first exposure and the development of clinical disease is seldom less than 20 years; latent periods substantially shorter than 30 years cannot provide evidence suggesting lack of carcinogenicity.
The evidence relevant to carcinogenicity from studies in humans is classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between exposure to the agent, mixture or exposure circumstance and human cancer. That is, a positive relationship has been observed between the exposure and cancer in studies in which chance, bias and confounding could be ruled out with reasonable confidence.
Limited evidence of carcinogenicity. A positive association has been observed between exposure to the agent, mixture or exposure circumstance and cancer for which a causal interpretation is considered to be credible, but chance, bias or confounding cannot be ruled out with reasonable confidence.
Inadequate evidence of carcinogenicity. The available studies are of insufficient quality, consistency or statistical power to permit a conclusion regarding the presence or absence of a causal association, or no data on cancer in humans are available.
Evidence suggesting lack of carcinogenicity. There are several adequate studies covering the full range of levels of exposure that human beings are known to encounter, which are mutually consistent in not showing a positive association between exposure to the agent and the studied cancer at any observed level of exposure. A conclusion of “evidence suggesting lack of carcinogenicity” is inevitably limited to the cancer sites, conditions and levels of exposure and length of observation covered by the available studies.
The applicability of an evaluation of the carcinogenicity of a mixture, process, occupation or industry on the basis of evidence from epidemiological studies depends on time and place. The specific exposure, process or activity considered most likely to be responsible for any excess risk should be sought and the evaluation focused as narrowly as possible. The long latent period of human cancer complicates the interpretation of epidemiological studies. A further complication is the fact that humans are exposed simultaneously to a variety of chemicals, which can interact either to increase or decrease the risk for neoplasia.
Studies on carcinogenicity in experimental animals
Studies in which experimental animals (usually mice and rats) are exposed to potential carcinogens and examined for evidence of cancer were begun about 50 years ago, with the aim of bringing a scientific approach to the study of chemical carcinogenesis and of avoiding some of the disadvantages of relying only on epidemiological data in humans. In the IARC Monographs all available, published studies of carcinogenicity in animals are summarized, and the degree of evidence of carcinogenicity is then classified into one of the following categories:
Sufficient evidence of carcinogenicity. A causal relationship has been established between the agent or mixture and an increased incidence of malignant neoplasms or of an appropriate combination of benign and malignant neoplasms in two or more species of animals or in two or more independent studies in one species carried out at different times or in different laboratories or under different protocols. Exceptionally, a single study in one species might be considered to provide sufficient evidence of carcinogenicity when malignant neoplasms occur to an unusual degree with regard to incidence, site, type of tumour or age at onset.
Limited evidence of carcinogenicity. The data suggest a carcinogenic effect but are limited for making a definitive evaluation because, for example, (a) the evidence of carcinogenicity is restricted to a single experiment; or (b) there are some unresolved questions regarding the adequacy of the design, conduct or interpretation of the study; or (c) the agent or mixture increases the incidence only of benign neoplasms or lesions of uncertain neoplastic potential, or of certain neoplasms which may occur spontaneously in high incidences in certain strains.
Inadequate evidence of carcinogenicity. The studies cannot be interpreted as showing either the presence or absence of a carcinogenic effect because of major qualitative or quantitative limitations, or no data on cancer in experimental animals are available.
Evidence suggesting lack of carcinogenicity. Adequate studies involving at least two species are available which show that, within the limits of the tests used, the agent or mixture is not carcinogenic. A conclusion of evidence suggesting lack of carcinogenicity is inevitably limited to the species, tumour sites and levels of exposure studied.
Other data relevant to an evaluation of carcinogenicity
Data on biological effects in humans that are of particular relevance include toxicological, kinetic and metabolic considerations and evidence of DNA binding, persistence of DNA lesions or genetic damage in exposed humans. Toxicological information, such as that on cytotoxicity and regeneration, receptor binding and hormonal and immunological effects, and data on kinetics and metabolism in experimental animals are summarized when considered relevant to the possible mechanism of the carcinogenic action of the agent. The results of tests for genetic and related effects are summarized for whole mammals including man, cultured mammalian cells and nonmammalian systems. Structure-activity relationships are mentioned when relevant.
For the agent, mixture or exposure circumstance being evaluated, the available data on end-points or other phenomena relevant to mechanisms of carcinogenesis from studies in humans, experimental animals and tissue and cell test systems are summarized within one or more of the following descriptive dimensions:
These dimensions are not mutually exclusive, and an agent may fall within more than one. Thus, for example, the action of an agent on the expression of relevant genes could be summarized under both the first and second dimension, even if it were known with reasonable certainty that those effects resulted from genotoxicity.
Finally, the body of evidence is considered as a whole, in order to reach an overall evaluation of the carcinogenicity to humans of an agent, mixture or circumstance of exposure. An evaluation may be made for a group of chemicals when supporting data indicate that other, related compounds for which there is no direct evidence of capacity to induce cancer in humans or in animals may also be carcinogenic; in such cases, a statement describing the rationale for this conclusion is added to the evaluation narrative.
The agent, mixture or exposure circumstance is described according to the wording of one of the following categories, and the designated group is given. The categorization of an agent, mixture or exposure circumstance is a matter of scientific judgement, reflecting the strength of the evidence derived from studies in humans and in experimental animals and from other relevant data.
Group 1. The agent (mixture) is carcinogenic to humans. The exposure circumstance entails exposures that are carcinogenic to humans.
This category is used when there is sufficient evidence of carcinogenicity in humans. Exceptionally, an agent (mixture) may be placed in this category when evidence in humans is less than sufficient but there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent (mixture) acts through a relevant mechanism of carcinogenicity.
This category includes agents, mixtures and exposure circumstances for which, at one extreme, the degree of evidence of carcinogenicity in humans is almost sufficient, as well as those for which, at the other extreme, there are no human data but for which there is evidence of carcinogenicity in experimental animals. Agents, mixtures and exposure circumstances are assigned to either group 2A (probably carcinogenic to humans) or group 2B (possibly carcinogenic to humans) on the basis of epidemiological and experimental evidence of carcinogenicity and other relevant data.
Group 2A. The agent (mixture) is probably carcinogenic to humans. The exposure circumstance entails exposures that are probably carcinogenic to humans. This category is used when there is limited evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals. In some cases, an agent (mixture) may be classified in this category when there is inadequate evidence of carcinogenicity in humans and sufficient evidence of carcinogenicity in experimental animals and strong evidence that the carcinogenesis is mediated by a mechanism that also operates in humans. Exceptionally, an agent, mixture or exposure circumstance may be classified in this category solely on the basis of limited evidence of carcinogenicity in humans.
Group 2B. The agent (mixture) is possibly carcinogenic to humans. The exposure circumstance entails exposures that are possibly carcinogenic to humans. This category is used for agents, mixtures and exposure circumstances for which there is limited evidence of carcinogenicity in humans and less than sufficient evidence of carcinogenicity in experimental animals. It may also be used when there is inadequate evidence of carcinogenicity in humans but there is sufficient evidence of carcinogenicity in experimental animals. In some instances, an agent, mixture or exposure circumstance for which there is inadequate evidence of carcinogenicity in humans but limited evidence of carcinogenicity in experimental animals together with supporting evidence from other relevant data may be placed in this group.
Group 3. The agent (mixture or exposure circumstance) is not classifiable as to its carcinogenicity to humans. This category is used most commonly for agents, mixtures and exposure circumstances for which the evidence of carcinogenicity is inadequate in humans and inadequate or limited in experimental animals.
Exceptionally, agents (mixtures) for which the evidence of carcinogenicity is inadequate in humans but sufficient in experimental animals may be placed in this category when there is strong evidence that the mechanism of carcinogenicity in experimental animals does not operate in humans.
Group 4. The agent (mixture) is probably not carcinogenic to humans. This category is used for agents or mixtures for which there is evidence suggesting lack of carcinogenicity in humans and in experimental animals. In some instances, agents or mixtures for which there is inadequate evidence of carcinogenicity in humans but evidence suggesting lack of carcinogenicity in experimental animals, consistently and strongly supported by a broad range of other relevant data, may be classified in this group.
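The way the streams of evidence combine into an overall group can be summarized as a decision table. The following Python sketch is an illustration only: the category names follow the text, but the simplified rules omit the mechanistic upgrade and downgrade exceptions that the Preamble allows and that expert working groups apply.

```python
# Simplified sketch of how human and animal evidence combine into an
# overall IARC group. Real evaluations rest on expert judgement and
# mechanistic data; the exceptional upgrades/downgrades are omitted.

SUFFICIENT, LIMITED, INADEQUATE, LACKING = (
    "sufficient", "limited", "inadequate", "suggesting lack",
)

def iarc_group(human: str, animal: str) -> str:
    """Map (human evidence, animal evidence) to an IARC group."""
    if human == SUFFICIENT:
        return "1"                      # carcinogenic to humans
    if human == LIMITED and animal == SUFFICIENT:
        return "2A"                     # probably carcinogenic
    if human == LIMITED or animal == SUFFICIENT:
        return "2B"                     # possibly carcinogenic
    if human == LACKING and animal == LACKING:
        return "4"                      # probably not carcinogenic
    return "3"                          # not classifiable

print(iarc_group(SUFFICIENT, INADEQUATE))  # 1
print(iarc_group(LIMITED, SUFFICIENT))     # 2A
print(iarc_group(INADEQUATE, SUFFICIENT))  # 2B
print(iarc_group(INADEQUATE, LIMITED))     # 3
```

The ordering of the branches matters: the limited-human/sufficient-animal combination must be tested before the broader Group 2B condition that would otherwise absorb it.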
Man-made classification systems cannot perfectly encompass all the complex entities of biology. They are, however, useful as guiding principles and may be modified as knowledge of carcinogenesis becomes more firmly established. In the categorization of an agent, mixture or exposure circumstance, it is essential to rely on scientific judgements formulated by groups of experts.
Results to Date
To date, 69 volumes of IARC Monographs have been published or are in press, in which evaluations of carcinogenicity to humans have been made for 836 agents or exposure circumstances. Seventy-four agents or exposures have been evaluated as carcinogenic to humans (Group 1), 56 as probably carcinogenic to humans (Group 2A), 225 as possibly carcinogenic to humans (Group 2B) and one as probably not carcinogenic to humans (Group 4). For 480 agents or exposures, the available epidemiological and experimental data did not allow an evaluation of their carcinogenicity to humans (Group 3).
Importance of Mechanistic Data
The revised Preamble, which first appeared in volume 54 of the IARC Monographs, allows for the possibility that an agent for which epidemiological evidence of cancer is less than sufficient can be placed in Group 1 when there is sufficient evidence of carcinogenicity in experimental animals and strong evidence in exposed humans that the agent acts through a relevant mechanism of carcinogenicity. Conversely, an agent for which there is inadequate evidence of carcinogenicity in humans together with sufficient evidence in experimental animals and strong evidence that the mechanism of carcinogenesis does not operate in humans may be placed in Group 3 instead of the normally assigned Group 2B—possibly carcinogenic to humans—category.
The use of such data on mechanisms has been discussed on three recent occasions:
While it is generally accepted that solar radiation is carcinogenic to humans (Group 1), epidemiological studies on cancer in humans for UVA and UVB radiation from sun lamps provide only limited evidence of carcinogenicity. Specific tandem base substitutions (CC→TT) have been observed in the p53 tumour suppressor gene in squamous-cell tumours at sun-exposed sites in humans. Although UVR can induce similar transitions in some experimental systems and UVB, UVA and UVC are carcinogenic in experimental animals, the available mechanistic data were not considered strong enough to allow the working group to classify UVB, UVA and UVC higher than Group 2A (IARC 1992). In a study published after the meeting (Kress et al. 1992), CC→TT transitions in p53 were demonstrated in UVB-induced skin tumours in mice, which might suggest that UVB should also be classified as carcinogenic to humans (Group 1).
The second case in which the possibility of placing an agent in Group 1 in the absence of sufficient epidemiological evidence was considered was 4,4´-methylene-bis(2-chloroaniline) (MOCA). MOCA is carcinogenic in dogs and rodents and is comprehensively genotoxic. It binds to DNA through reaction with N-hydroxy MOCA and the same adducts that are formed in target tissues for carcinogenicity in animals have been found in urothelial cells from a small number of exposed humans. After lengthy discussions on the possibility of an upgrading, the working group finally made an overall evaluation of Group 2A, probably carcinogenic to humans (IARC 1993).
During a recent evaluation of ethylene oxide (IARC 1994b), the available epidemiological studies provided limited evidence of carcinogenicity in humans, and studies in experimental animals provided sufficient evidence of carcinogenicity. Taking into account the other relevant data that (1) ethylene oxide induces a sensitive, persistent, dose-related increase in the frequency of chromosomal aberrations and sister chromatid exchanges in peripheral lymphocytes and micronuclei in bone-marrow cells from exposed workers; (2) it has been associated with malignancies of the lymphatic and haematopoietic system in both humans and experimental animals; (3) it induces a dose-related increase in the frequency of haemoglobin adducts in exposed humans and dose-related increases in the numbers of adducts in both DNA and haemoglobin in exposed rodents; (4) it induces gene mutations and heritable translocations in germ cells of exposed rodents; and (5) it is a powerful mutagen and clastogen at all phylogenetic levels; ethylene oxide was classified as carcinogenic to humans (Group 1).
The Preamble also allows for the possibility that an agent for which there is sufficient evidence of carcinogenicity in animals can be placed in Group 3 (instead of Group 2B, in which it would normally be categorized) when there is strong evidence that the mechanism of carcinogenicity in animals does not operate in humans; no working group has yet made use of this possibility. Such a possibility could have been envisaged in the case of d-limonene had there been sufficient evidence of its carcinogenicity in animals, since there are data suggesting that α2u-globulin production in the male rat kidney is linked to the renal tumours observed.
Among the many chemicals nominated as priorities by an ad hoc working group in December 1993, several shared common postulated intrinsic mechanisms of action or belonged to classes of agents defined by their biological properties. The working group recommended that, before evaluations are made within the Monographs programme on agents such as peroxisome proliferators, fibres, dusts and thyrostatic agents, special ad hoc groups should be convened to discuss the latest state of the art on their particular mechanisms of action.
As in many other countries, risk due to exposure to chemicals is regulated in Japan according to the category of chemicals concerned, as listed in table 1. The governmental ministry or agency in charge varies. In the case of industrial chemicals in general, the major law that applies is the Law Concerning Examination and Regulation of Manufacture, Etc. of Chemical Substances, or Chemical Substances Control Law (CSCL) for short. The agencies in charge are the Ministry of International Trade and Industry and the Ministry of Health and Welfare. In addition, the Labour Safety and Hygiene Law (by the Ministry of Labour) provides that industrial chemicals should be examined for possible mutagenicity and that, if the chemical concerned is found to be mutagenic, the exposure of workers to the chemical should be minimized by enclosure of production facilities, installation of local exhaust systems, use of protective equipment, and so on.
Table 1. Regulation of chemical substances by laws, Japan
| Category of chemicals | Law | Ministry or agency |
| --- | --- | --- |
| Food and food additives | Foodstuff Hygiene Law | MHW |
| Narcotics | Narcotics Control Law | MHW |
| Agricultural chemicals | Agricultural Chemicals Control Law | MAFF |
| Industrial chemicals | Chemical Substances Control Law | MHW & MITI |
| All chemicals except for radioactive substances | Law concerning Regulation of House-Hold Products Containing Harmful Substances; Poisonous and Deleterious Substances Control Law; Labour Safety and Hygiene Law | MHW; MHW; MOL |
| Radioactive substances | Law concerning Radioactive Substances | STA |
Abbreviations: MHW—Ministry of Health and Welfare; MAFF—Ministry of Agriculture, Forestry and Fishery; MITI—Ministry of International Trade and Industry; MOL—Ministry of Labour; STA—Science and Technology Agency.
Because hazardous industrial chemicals will be identified primarily by the CSCL, the framework of tests for hazard identification under CSCL will be described in this section.
The Concept of the Chemical Substances Control Law
The original CSCL was passed by the Diet (the parliament of Japan) in 1973 and took effect on 16 April 1974. The basic motivation for the Law was the prevention of environmental pollution and resulting human health effects by PCBs and PCB-like substances. PCBs are characterized by (1) persistence in the environment (poor biodegradability), (2) increasing concentration as one goes up the food chain (or food web) (bioaccumulation) and (3) chronic toxicity in humans. Accordingly, the Law mandated that each industrial chemical be examined for such characteristics prior to marketing in Japan. In parallel with the passage of the Law, the Diet decided that the Environment Agency should monitor the general environment for possible chemical pollution. The Law was then amended by the Diet in 1986 (the amendment taking effect in 1987) in order to harmonize with actions of the OECD regarding health and the environment, the lowering of non-tariff barriers in international trade and especially the setting of a minimum premarketing set of data (MPD) and related test guidelines. The amendment also reflected the observation at the time, through monitoring of the environment, that chemicals such as trichloroethylene and tetrachloroethylene, which are poorly biodegradable and chronically toxic although not highly bioaccumulating, can pollute the environment; these chemical substances were detected in groundwater nationwide.
The Law classifies industrial chemicals into two categories: existing chemicals and new chemicals. The existing chemicals are those listed in the “Existing Chemicals Inventory” (established with the passage of the original Law) and number about 20,000, the number depending on the way some chemicals are named in the inventory. Chemicals not in the inventory are called new chemicals. The government is responsible for hazard identification of the existing chemicals, whereas the company or other entity that wishes to introduce a new chemical into the market in Japan is responsible for hazard identification of the new chemical. Two governmental ministries, the Ministry of Health and Welfare (MHW) and the Ministry of International Trade and Industry (MITI), are in charge of the Law, and the Environment Agency can express its opinion when necessary. Radioactive substances, specified poisons, stimulants and narcotics are excluded because they are regulated by other laws.
Test System Under CSCL
The flow scheme of examination is depicted in figure 1; it is a stepwise system in principle. All chemicals (for exceptions, see below) should be examined for biodegradability in vitro. If the chemical is readily biodegradable, it is considered “safe”. Otherwise, the chemical is examined for bioaccumulation. If it is found to be “highly accumulating”, full toxicity data are requested, on the basis of which the chemical will be classified as a “Class 1 specified chemical substance” when toxicity is confirmed, or as “safe” otherwise. A chemical with no or low accumulation will be subject to toxicity screening tests, which consist of mutagenicity tests and 28-day repeated dosing of experimental animals (for details, see table 2). After comprehensive evaluation of the toxicity data, the chemical will be classified as a “Designated chemical substance” if the data indicate toxicity; otherwise, it is considered “safe”. When other data suggest a strong possibility of environmental pollution by the chemical concerned, full toxicity data are requested, on the basis of which the designated chemical will be reclassified as a “Class 2 specified chemical substance” when positive; otherwise, it is considered “safe”. Toxicological and ecotoxicological characteristics of “Class 1 specified chemical substances”, “Class 2 specified chemical substances” and “Designated chemical substances” are listed in table 3, together with outlines of the regulatory actions.
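The stepwise flow just described can be sketched as a simple decision function. This is an illustrative abstraction, not the legal test scheme itself: the judgements (“readily biodegradable”, “highly accumulating”, the toxicity findings) are expert determinations reduced here to booleans.

```python
# Hedged sketch of the stepwise CSCL examination flow. Classification
# names follow the text; each threshold judgement is abstracted as a
# boolean that in practice reflects formal test results.

def cscl_classify(readily_biodegradable: bool,
                  highly_accumulating: bool,
                  toxic_in_screening: bool,
                  pollution_suspected: bool,
                  toxic_in_full_tests: bool) -> str:
    if readily_biodegradable:
        return "safe"
    if highly_accumulating:
        # Full toxicity data requested
        return ("Class 1 specified chemical substance"
                if toxic_in_full_tests else "safe")
    # Non- or low accumulation: screening tests (mutagenicity, 28-day dosing)
    if not toxic_in_screening:
        return "safe"
    if not pollution_suspected:
        return "Designated chemical substance"
    # Suspected environmental pollution: full toxicity data requested
    return ("Class 2 specified chemical substance"
            if toxic_in_full_tests else "safe")

print(cscl_classify(False, False, True, False, False))
# Designated chemical substance
```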
Table 2. Screening tests for industrial chemicals under CSCL

| Test item | Outline |
| --- | --- |
| Biodegradation | For 2 weeks in principle, in vitro, with activated sludge |
| Bioaccumulation | For 8 weeks in principle, with carp |
| Mutagenicity | Ames test and test with E. coli, ±S9 mix; chromosomal aberration test with CHL cells, etc., ±S9 mix |
| 28-day repeated dosing | Rats, 3 dose levels plus control for NOEL; 2-week recovery test at the highest dose level in addition |
Table 3. Characteristics of classified chemical substances and regulations under the Japanese Chemical Substances Control Law
| Classification | Characteristics | Regulations |
| --- | --- | --- |
| Class 1 specified chemical substances | Nonbiodegradability; high bioaccumulation; chronic toxicity | Authorization to manufacture or import necessary¹; restriction in use |
| Class 2 specified chemical substances | Nonbiodegradability; non- or low bioaccumulation; chronic toxicity; suspected environmental pollution | Notification on scheduled manufacturing or import quantity; technical guideline to prevent pollution/health effects |
| Designated chemical substances | Nonbiodegradability; non- or low bioaccumulation; suspected chronic toxicity | Report on manufacturing or import quantity; study and literature survey |

1 No authorization in practice.
Testing is not required for a new chemical whose use amount is limited (i.e., less than 1,000 kg/company/year and less than 1,000 kg/year for all of Japan). Polymers are examined under the high molecular-weight compound flow scheme, which was developed on the assumption that absorption into the body is unlikely when a chemical has a molecular weight greater than 1,000 and is stable in the environment.
Results of Classification of Industrial Chemicals, as of 1996
From the time the CSCL took effect in 1974 to the end of 1996, 1,087 existing chemical items were examined under the original and amended CSCL. Among the 1,087, nine items (some identified by generic names) were classified as “Class 1 specified chemical substances”. Among those remaining, 36 were classified as “designated”, of which 23 were later reclassified as “Class 2 specified chemical substances” and the other 13 remained “designated”. The names of the Class 1 and Class 2 specified chemical substances are listed in figure 2. It is clear from the figure that most of the Class 1 chemicals are organochlorine pesticides, in addition to PCB and its substitute, the exception being one seaweed killer. A majority of the Class 2 chemicals are seaweed killers, with the exceptions of three once widely used chlorinated hydrocarbon solvents.
In the same period, some 2,335 new chemicals were submitted for approval, of which 221 (about 9.5%) were identified as “designated”, but none as Class 1 or Class 2 chemicals. The remaining chemicals were considered “safe” and approved for manufacturing or import.
Toxicology plays a major role in the development of regulations and other occupational health policies. In order to prevent occupational injury and illness, decisions are increasingly based upon information obtainable prior to, or in the absence of, the types of human exposures that would yield definitive information on risk, such as epidemiological studies. In addition, toxicological studies, as described in this chapter, can provide precise information on dose and response under the controlled conditions of laboratory research; this information is often difficult to obtain in the uncontrolled setting of occupational exposures. However, this information must be carefully evaluated in order to estimate the likelihood of adverse effects in humans, the nature of these adverse effects, and the quantitative relationship between exposures and effects.
Considerable attention has been given in many countries, since the 1980s, to developing objective methods for utilizing toxicological information in regulatory decision-making. Formal methods, frequently referred to as risk assessment, have been proposed and utilized in these countries by both governmental and non-governmental entities. Risk assessment has been varyingly defined; fundamentally it is an evaluative process that incorporates toxicology, epidemiology and exposure information to identify and estimate the probability of adverse effects associated with exposures to hazardous substances or conditions. Risk assessment may be qualitative in nature, indicating the nature of an adverse effect and a general estimate of likelihood, or it may be quantitative, with estimates of numbers of affected persons at specific levels of exposure. In many regulatory systems, risk assessment is undertaken in four stages: hazard identification, the description of the nature of the toxic effect; dose-response evaluation, a semi-quantitative or quantitative analysis of the relationship between exposure (or dose) and severity or likelihood of toxic effect; exposure assessment, the evaluation of information on the range of exposures likely to occur for populations in general or for subgroups within populations; risk characterization, the compilation of all the above information into an expression of the magnitude of risk expected to occur under specified exposure conditions (see NRC 1983 for a statement of these principles).
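As an illustration of how the four stages connect quantitatively, the following sketch applies the widely used hazard-quotient approach to a non-cancer endpoint. All numerical values (the NOAEL, uncertainty factors and exposure parameters) are hypothetical and chosen only to show the arithmetic.

```python
# Illustrative (hypothetical numbers) sketch of the four risk-assessment
# stages for a non-cancer endpoint: risk characterization compares the
# estimated exposure with a reference level derived from dose-response data.

# 1. Hazard identification: liver toxicity observed in a 90-day rodent study
noael_mg_kg_day = 10.0        # no-observed-adverse-effect level (hypothetical)

# 2. Dose-response evaluation: apply uncertainty factors to the NOAEL
uncertainty_factor = 100.0    # 10 (animal-to-human) x 10 (human variability)
reference_dose = noael_mg_kg_day / uncertainty_factor   # 0.1 mg/kg/day

# 3. Exposure assessment: estimated daily intake for exposed workers
air_conc_mg_m3 = 0.5          # workplace air concentration (hypothetical)
inhaled_m3_day = 10.0         # air inhaled per working day
body_weight_kg = 70.0
exposure = air_conc_mg_m3 * inhaled_m3_day / body_weight_kg  # mg/kg/day

# 4. Risk characterization: a hazard quotient > 1 flags potential concern
hazard_quotient = exposure / reference_dose
print(f"{hazard_quotient:.2f}")  # 0.71
```

A quotient below 1, as here, suggests the estimated exposure is below the reference level; it is a screening result, not a guarantee of safety.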
In this section, three approaches to risk assessment are presented as illustrative. It is impossible to provide a comprehensive compendium of risk assessment methods used throughout the world, and these selections should not be taken as prescriptive. It should be noted that there are trends towards harmonization of risk assessment methods, partly in response to provisions in the recent GATT accords. Two processes of international harmonization of risk assessment methods are currently underway, through the International Programme on Chemical Safety (IPCS) and the Organization for Economic Cooperation and Development (OECD). These organizations also maintain current information on national approaches to risk assessment.
Structure-activity relationship (SAR) analysis is the use of information on the molecular structure of chemicals to predict important characteristics related to persistence, distribution, uptake and absorption, and toxicity. SAR is an alternative method of identifying potentially hazardous chemicals, which holds promise of assisting industries and governments in prioritizing substances for further evaluation or for early-stage decision making on new chemicals. Toxicology is an increasingly expensive and resource-intensive undertaking. Increased concern over the potential for chemicals to cause adverse effects in exposed human populations has prompted regulatory and health agencies to expand the range and sensitivity of tests to detect toxicological hazards. At the same time, the real and perceived burdens of regulation upon industry have provoked concerns about the practicality of toxicity-testing methods and data analysis. At present, the determination of chemical carcinogenicity depends upon lifetime testing of at least two species, of both sexes, at several doses, with careful histopathological analysis of multiple organs, as well as detection of preneoplastic changes in cells and target organs. In the United States, the cancer bioassay is estimated to cost in excess of $3 million (1995 dollars).
Even with unlimited financial resources, the burden of testing the approximately 70,000 existing chemicals produced in the world today would exceed the available resources of trained toxicologists. Centuries would be required to complete even a first-tier evaluation of these chemicals (NRC 1984). In many countries, ethical concerns over the use of animals in toxicity testing have increased, bringing additional pressures upon the use of standard methods of toxicity testing. SAR has been widely used in the pharmaceutical industry to identify molecules with potential for beneficial use in treatment (Hansch and Zhang 1993). In environmental and occupational health policy, SAR is used to predict the dispersion of compounds in the physical-chemical environment and to screen new chemicals for further evaluation of potential toxicity. Under the US Toxic Substances Control Act (TSCA), the EPA has since 1979 used an SAR approach as a “first screen” of new chemicals in the premanufacture notification (PMN) process; Australia uses a similar approach as part of its new chemicals notification (NICNAS) procedure. In the US, SAR analysis is an important basis for determining that there is a reasonable basis to conclude that manufacture, processing, distribution, use or disposal of a substance will present an unreasonable risk of injury to human health or the environment, as required by Section 5(f) of TSCA. On the basis of this finding, the EPA can then require actual tests of the substance under Section 6 of TSCA.
Rationale for SAR
The scientific rationale for SAR is based upon the assumption that the molecular structure of a chemical will predict important aspects of its behaviour in physical-chemical and biological systems (Hansch and Leo 1979).
The SAR review process includes identification of the chemical structure, including empirical formulations as well as the pure compound; identification of structurally analogous substances; searching databases and literature for information on structural analogs; and analysis of toxicity and other data on structural analogs. In some rare cases, information on the structure of the compound alone can be sufficient to support some SAR analysis, based upon well-understood mechanisms of toxicity. Several databases on SAR have been compiled, as well as computer-based methods for molecular structure prediction.
With this information, the following endpoints can be estimated with SAR:
It should be noted that SAR methods do not exist for such important health endpoints as carcinogenicity, developmental toxicity, reproductive toxicity, neurotoxicity, immunotoxicity or other target organ effects. This is due to three factors: the lack of a large database upon which to test SAR hypotheses, the lack of knowledge of structural determinants of toxic action, and the multiplicity of target cells and mechanisms that are involved in these endpoints (see “The United States approach to risk assessment of reproductive toxicants and neurotoxic agents”). Some limited attempts have been made to utilize SAR for predicting pharmacokinetics, using information on partition coefficients and solubility (Johanson and Naslund 1988). More extensive quantitative SAR has been done to predict P450-dependent metabolism of a range of compounds and the binding of dioxin- and PCB-like molecules to the cytosolic “dioxin” receptor (Hansch and Zhang 1993).
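A quantitative SAR of the kind cited above typically regresses a toxicity measure against physical-chemical descriptors such as the octanol-water partition coefficient (log P). The following minimal sketch, using invented data for a hypothetical congeneric series, shows the form such a model takes.

```python
# Minimal QSAR sketch: fit log(1/LC50) against log P for a hypothetical
# congeneric series, then predict toxicity of an untested member.
# All numbers are invented for illustration.

import statistics

log_p =      [1.0, 1.5, 2.0, 2.5, 3.0]   # octanol-water partition coefficients
log_inv_lc = [0.8, 1.1, 1.5, 1.8, 2.1]   # observed log(1/LC50)

# Ordinary least-squares slope and intercept
mean_x, mean_y = statistics.mean(log_p), statistics.mean(log_inv_lc)
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_p, log_inv_lc))
         / sum((x - mean_x) ** 2 for x in log_p))
intercept = mean_y - slope * mean_x

# Predict for an untested analogue with log P = 2.2
predicted = intercept + slope * 2.2
print(round(slope, 3), round(predicted, 2))  # 0.66 1.59
```

Such a fit is only defensible within a structurally homogeneous series and within the descriptor range covered by the training data, which is one reason SAR generalizes poorly to the complex endpoints noted above.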
SAR has been shown to have varying predictability for some of the endpoints listed above, as shown in table 1. This table presents data from two comparisons of predicted activity with actual results obtained by empirical measurement or toxicity testing. SAR as conducted by US EPA experts performed more poorly for predicting physical-chemical properties than for predicting biological activity, including biodegradation. For toxicity endpoints, SAR performed best for predicting mutagenicity. Ashby and Tennant (1991) in a more extended study also found good predictability of short-term genotoxicity in their analysis of NTP chemicals. These findings are not surprising, given current understanding of molecular mechanisms of genotoxicity (see “Genetic toxicology”) and the role of electrophilicity in DNA binding. In contrast, SAR tended to underpredict systemic and subchronic toxicity in mammals and to overpredict acute toxicity to aquatic organisms.
Table 1. Comparison of SAR and test data: OECD/NTP analyses
| Endpoint | Agreement (%) | Disagreement (%) | Number |
|---|---|---|---|
| Acute mammalian toxicity (LD50) | 80 | 20¹ | 142 |
| Carcinogenicity³: two-year bioassay | 72–95⁴ | — | 301 |
Source: Data from OECD, personal communication, C. Auer, US EPA. Only those endpoints for which comparable SAR predictions and actual test data were available were used in this analysis. NTP data are from Ashby and Tennant 1991.
1 Of concern was the failure by SAR to predict acute toxicity in 12% of the chemicals tested.
2 OECD data, based on Ames test concordance with SAR.
3 NTP data, based on genetox assays compared to SAR predictions for several classes of “structurally alerting chemicals”.
4 Concordance varies with class; highest concordance was with aromatic amino/nitro compounds; lowest with “miscellaneous” structures.
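The agreement figures in table 1 are simple concordance percentages: the share of chemicals for which the SAR prediction matched the empirical test result. Given paired predictions and outcomes, the calculation can be sketched as follows (the paired values here are invented for illustration only):

```python
# Concordance as reported in table 1: the percentage of chemicals for which
# the SAR call (positive/negative) agreed with the test result.
# The example prediction/observation pairs are invented.

def concordance(predicted: list[bool], observed: list[bool]) -> float:
    """Percentage of chemicals where prediction and test result agree."""
    matches = sum(p == o for p, o in zip(predicted, observed))
    return 100.0 * matches / len(observed)

pred = [True, True, False, False, True]   # SAR calls
obs  = [True, False, False, False, True]  # empirical test results
print(concordance(pred, obs))  # → 80.0
```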
For other toxic endpoints, as noted above, SAR has less demonstrable utility. Mammalian toxicity predictions are complicated by the lack of SAR for toxicokinetics of complex molecules. Nevertheless, some attempts have been made to propose SAR principles for complex mammalian toxicity endpoints (for instance, see Bernstein (1984) for an SAR analysis of potential male reproductive toxicants). In most cases, the database is too small to permit rigorous testing of structure-based predictions.
At this point it may be concluded that SAR may be useful mainly for prioritizing the investment of toxicity testing resources or for raising early concerns about potential hazard. Only in the case of mutagenicity is it likely that SAR analysis by itself can be utilized with reliability to inform other decisions. For no endpoint is it likely that SAR can provide the type of quantitative information required for risk assessment purposes as discussed elsewhere in this chapter and Encyclopaedia.
The emergence of sophisticated technologies in molecular and cellular biology has spurred a relatively rapid evolution in the life sciences, including toxicology. In effect, the focus of toxicology is shifting from whole animals and populations of whole animals to the cells and molecules of individual animals and humans. Since the mid-1980s, toxicologists have begun to employ these new methodologies in assessing the effects of chemicals on living systems. As a logical progression, such methods are being adapted for the purposes of toxicity testing. These scientific advances have worked together with social and economic factors to effect change in the evaluation of product safety and potential risk.
Economic factors are specifically related to the volume of materials that must be tested. A plethora of new cosmetics, pharmaceuticals, pesticides, chemicals and household products is introduced into the market every year. All of these products must be evaluated for their potential toxicity. In addition, there is a backlog of chemicals already in use that have not been adequately tested. The enormous task of obtaining detailed safety information on all of these chemicals using traditional whole animal testing methods would be costly in terms of both money and time, if it could even be accomplished.
There are also societal issues that relate to public health and safety, as well as increasing public concern about the use of animals for product safety testing. With regard to human safety, public interest and environmental advocacy groups have placed significant pressure on government agencies to apply more stringent regulations on chemicals. A recent example of this has been a movement by some environmental groups to ban chlorine and chlorine-containing compounds in the United States. One of the motivations for such an extreme action lies in the fact that most of these compounds have never been adequately tested. From a toxicological perspective, the concept of banning a whole class of diverse chemicals based simply on the presence of chlorine is both scientifically unsound and irresponsible. Yet, it is understandable that from the public’s perspective, there must be some assurance that chemicals released into the environment do not pose a significant health risk. Such a situation underscores the need for more efficient and rapid methods to assess toxicity.
The other societal concern that has impacted the area of toxicity testing is animal welfare. Growing numbers of animal protection groups throughout the world have voiced considerable opposition to the use of whole animals for product safety testing. Active campaigns have been waged against manufacturers of cosmetics, household and personal care products and pharmaceuticals in attempts to stop animal testing. Such efforts in Europe have resulted in the passage of the Sixth Amendment to Directive 76/768/EEC (the Cosmetics Directive). The consequence of this Directive is that cosmetic products or cosmetic ingredients that have been tested in animals after January 1, 1998 cannot be marketed in the European Union, unless alternative methods remain insufficiently validated. While this Directive has no jurisdiction over the sale of such products in the United States or other countries, it will significantly affect those companies that have international markets that include Europe.
The concept of alternatives, which forms the basis for the development of tests other than those on whole animals, is defined by the three Rs: reduction in the numbers of animals used; refinement of protocols so that animals experience less stress or discomfort; and replacement of current animal tests with in vitro tests (i.e., tests done outside of the living animal), computer models or testing on lower vertebrate or invertebrate species. The three Rs were introduced in a book published in 1959 by two British scientists, W.M.S. Russell and Rex Burch, The Principles of Humane Experimental Technique. Russell and Burch maintained that the only way in which valid scientific results could be obtained is through the humane treatment of animals, and believed that methods should be developed to reduce animal use and ultimately replace it. Interestingly, the principles outlined by Russell and Burch received little attention until the resurgence of the animal welfare movement in the mid-1970s. Today the concept of the three Rs is very much in the forefront with regard to research, testing and education.
In summary, the development of in vitro test methodologies has been influenced by a variety of factors that have converged over the last 10 to 20 years. It is difficult to ascertain whether any of these factors alone would have had such a profound effect on toxicity testing strategies.
Concept of In Vitro Toxicity Tests
This section will focus solely on in vitro methods for evaluating toxicity, as one of the alternatives to whole-animal testing. Additional non-animal alternatives such as computer modelling and quantitative structure-activity relationships are discussed in other articles of this chapter.
In vitro studies are generally conducted in animal or human cells or tissues outside of the body. In vitro literally means “in glass”, and refers to procedures carried out on living material or components of living material cultured in petri dishes or in test tubes under defined conditions. These may be contrasted with in vivo studies, or those carried out “in the living animal”. While it is difficult, if not impossible, to project the effects of a chemical on a complex organism when the observations are confined to a single type of cells in a dish, in vitro studies do provide a significant amount of information about intrinsic toxicity as well as cellular and molecular mechanisms of toxicity. In addition, they offer many advantages over in vivo studies in that they are generally less expensive and they may be conducted under more controlled conditions. Furthermore, despite the fact that small numbers of animals are still needed to obtain cells for in vitro cultures, these methods may be considered reduction alternatives (since many fewer animals are used compared to in vivo studies) and refinement alternatives (because they eliminate the need to subject the animals to the adverse toxic consequences imposed by in vivo experiments).
In order to interpret the results of in vitro toxicity tests, determine their potential usefulness in assessing toxicity and relate them to the overall toxicological process in vivo, it is necessary to understand which part of the toxicological process is being examined. The entire toxicological process consists of events that begin with the organism’s exposure to a physical or chemical agent, progress through cellular and molecular interactions and ultimately manifest themselves in the response of the whole organism. In vitro tests are generally limited to the part of the toxicological process that takes place at the cellular and molecular level. The types of information that may be obtained from in vitro studies include pathways of metabolism, interaction of active metabolites with cellular and molecular targets and potentially measurable toxic endpoints that can serve as molecular biomarkers for exposure. In an ideal situation, the mechanism of toxicity of each chemical from exposure to organismal manifestation would be known, such that the information obtained from in vitro tests could be fully interpreted and related to the response of the whole organism. However, this is virtually impossible, since relatively few complete toxicological mechanisms have been elucidated. Thus, toxicologists are faced with a situation in which the results of an in vitro test cannot be used as an entirely accurate prediction of in vivo toxicity because the mechanism is unknown. However, frequently during the process of developing an in vitro test, components of the cellular and molecular mechanism(s) of toxicity are elucidated.
One of the key unresolved issues surrounding the development and implementation of in vitro tests is related to the following consideration: should they be mechanistically based, or is it sufficient for them to be descriptive? It is inarguably better from a scientific perspective to utilize only mechanistically based tests as replacements for in vivo tests. However, in the absence of complete mechanistic knowledge, the prospect of developing in vitro tests to completely replace whole-animal tests in the near future is almost nil. This does not, however, rule out the use of more descriptive types of assays as early screening tools, which is the case presently. These screens have resulted in a significant reduction in animal use. Therefore, until such time as more mechanistic information is generated, it may be necessary to employ, to a more limited extent, tests whose results simply correlate well with those obtained in vivo.
In Vitro Tests for Cytotoxicity
In this section, several in vitro tests that have been developed to assess a chemical’s cytotoxic potential will be described. For the most part, these tests are easy to perform and analysis can be automated. One commonly used in vitro test for cytotoxicity is the neutral red assay. This assay is done on cells in culture, and for most applications, the cells can be maintained in culture dishes that contain 96 small wells, each 6.4 mm in diameter. Since each well can be used for a single determination, this arrangement can accommodate multiple concentrations of the test chemical as well as positive and negative controls, with a sufficient number of replicates for each. Following treatment of the cells with various concentrations of the test chemical ranging over at least two orders of magnitude (e.g., from 0.01 mM to 1 mM), as well as positive and negative control chemicals, the cells are rinsed and treated with neutral red, a dye that can be taken up and retained only by live cells. The dye may be added upon removal of the test chemical to determine immediate effects, or it may be added at various times after the test chemical is removed to determine cumulative or delayed effects. The intensity of the colour in each well corresponds to the number of live cells in that well. The colour intensity is measured by a spectrophotometer, which may be equipped with a plate reader. The plate reader is programmed to provide individual measurements for each of the 96 wells of the culture dish. This automated methodology permits the investigator to rapidly perform a concentration-response experiment and to obtain statistically useful data.
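Before a concentration-response relationship is drawn from such a plate, the raw absorbance readings are typically normalized to control wells. A minimal sketch of that bookkeeping step follows; the well layout, absorbance values and control names are invented for illustration, not taken from any particular protocol.

```python
# Sketch of converting plate-reader absorbances from a neutral red assay
# into percent viability. All numeric values below are hypothetical.

negative_control = [0.82, 0.80, 0.85, 0.79]   # untreated wells (100% viable)
blank = [0.05, 0.04, 0.06, 0.05]              # dye background, no cells

def mean(xs):
    return sum(xs) / len(xs)

def percent_viability(well_absorbance: float) -> float:
    """Express a treated well's neutral red uptake relative to controls."""
    baseline = mean(blank)
    full_signal = mean(negative_control) - baseline
    return 100.0 * (well_absorbance - baseline) / full_signal

# Wells treated with increasing concentrations of a test chemical:
treated = [0.78, 0.65, 0.42, 0.21, 0.08]
print([round(percent_viability(a), 1) for a in treated])
```

Each replicate well yields one viability figure, so a full 96-well plate supports a concentration-response curve with replicates at every concentration.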
Another relatively simple assay for cytotoxicity is the MTT test. MTT (3[4,5-dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide) is a tetrazolium dye that is reduced by mitochondrial enzymes to a blue colour. Only cells with viable mitochondria will retain the ability to carry out this reaction; therefore the colour intensity is directly related to the degree of mitochondrial integrity. This is a useful test to detect general cytotoxic compounds as well as those agents that specifically target mitochondria.
The measurement of lactate dehydrogenase (LDH) activity is also used as a broad-based assay for cytotoxicity. This enzyme is normally present in the cytoplasm of living cells and is released into the cell culture medium through leaky cell membranes of dead or dying cells that have been adversely affected by a toxic agent. Small amounts of culture medium may be removed at various times after chemical treatment of the cells to measure the amount of LDH released and determine a time course of toxicity. While the LDH release assay is a very general assessment of cytotoxicity, it is useful because it is easy to perform and it may be done in real time.
There are many new methods being developed to detect cellular damage. More sophisticated methods employ fluorescent probes to measure a variety of intracellular parameters, such as calcium release and changes in pH and membrane potential. In general, these probes are very sensitive and may detect more subtle cellular changes, thus reducing the need to use cell death as an endpoint. In addition, many of these fluorescent assays may be automated by the use of 96-well plates and fluorescent plate readers.
Once data have been collected on a series of chemicals using one of these tests, the relative toxicities may be determined. The relative toxicity of a chemical, as determined in an in vitro test, may be expressed as the concentration that reduces the endpoint response to 50% of that of untreated cells. This determination is referred to as the EC50 (Effective Concentration for 50% of the cells) and may be used to compare toxicities of different chemicals in vitro. (A similar term used in evaluating relative toxicity is IC50, indicating the concentration of a chemical that causes a 50% inhibition of a cellular process, e.g., the ability to take up neutral red.) It is not easy to assess whether the relative in vitro toxicity of the chemicals is comparable to their relative in vivo toxicities, since there are so many confounding factors in the in vivo system, such as toxicokinetics, metabolism, repair and defence mechanisms. In addition, since most of these assays measure general cytotoxicity endpoints, they are not mechanistically based. Therefore, agreement between in vitro and in vivo relative toxicities is simply correlative. Despite the numerous complexities and difficulties in extrapolating from in vitro to in vivo, these in vitro tests are proving to be very valuable because they are simple and inexpensive to perform and may be used as screens to flag highly toxic drugs or chemicals at early stages of development.
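An EC50 as defined above can be estimated from concentration-response data by interpolating on a log-concentration scale between the two concentrations that bracket the 50% response. The sketch below uses invented data and simple linear interpolation; in practice a sigmoidal (e.g., four-parameter logistic) curve fit is usually preferred.

```python
import math

# Estimating an EC50 by linear interpolation on a log-concentration scale.
# The concentration-response pairs are invented for illustration; a real
# analysis would normally fit a sigmoidal curve to replicate data.

def ec50(concs: list[float], responses: list[float]) -> float:
    """Concentration giving a 50% response (responses in %, declining)."""
    pairs = list(zip(concs, responses))
    for (c1, r1), (c2, r2) in zip(pairs, pairs[1:]):
        if r1 >= 50.0 >= r2:  # this interval brackets the 50% response
            frac = (r1 - 50.0) / (r1 - r2)
            log_c = math.log10(c1) + frac * (math.log10(c2) - math.log10(c1))
            return 10 ** log_c
    raise ValueError("50% response not bracketed by the data")

concs = [0.01, 0.03, 0.1, 0.3, 1.0]        # mM, spanning two orders of magnitude
responses = [95.0, 80.0, 50.0, 20.0, 5.0]  # percent of control viability
print(round(ec50(concs, responses), 3))  # → 0.1
```

Ranking chemicals by EC50 in this way supports the kind of early screening described above, even when the underlying mechanism of toxicity is unknown.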
Target Organ Toxicity
In vitro tests can also be used to assess specific target organ toxicity. There are a number of difficulties associated with designing such tests, the most notable being the inability of in vitro systems to maintain many of the features of the organ in vivo. Frequently, when cells are taken from animals and placed into culture, they tend either to degenerate quickly and/or to dedifferentiate, that is, lose their organ-like functions and become more generic. This presents a problem in that within a short period of time, usually a few days, the cultures are no longer useful for assessing organ-specific effects of a toxin.
Many of these problems are being overcome because of recent advances in molecular and cellular biology. Information that is obtained about the cellular environment in vivo may be utilized in modulating culture conditions in vitro. Since the mid-1980s, new growth factors and cytokines have been discovered, and many of these are now available commercially. Addition of these factors to cells in culture helps to preserve their integrity and may also help to retain more differentiated functions for longer periods of time. Other basic studies have increased the knowledge of the nutritional and hormonal requirements of cells in culture, so that new media may be formulated. Recent advances have also been made in identifying both naturally occurring and artificial extracellular matrices on which cells may be cultured. Culture of cells on these different matrices can have profound effects on both their structure and function. A major advantage derived from this knowledge is the ability to intricately control the environment of cells in culture and individually examine the effects of these factors on basic cell processes and on their responses to different chemical agents. In short, these systems can provide great insight into organ-specific mechanisms of toxicity.
Many target organ toxicity studies are conducted in primary cells, which by definition are freshly isolated from an organ, and usually exhibit a finite lifetime in culture. There are many advantages to having primary cultures of a single cell type from an organ for toxicity assessment. From a mechanistic perspective, such cultures are useful for studying specific cellular targets of a chemical. In some instances, two or more cell types from an organ may be cultured together, and this provides an added advantage of being able to look at cell-cell interactions in response to a toxin. Some co-culture systems for skin have been engineered so that they form a three-dimensional structure resembling skin in vivo. It is also possible to co-culture cells from different organs—for example, liver and kidney. This type of culture would be useful in assessing the effects on kidney cells of a chemical that must be bioactivated in the liver.
Molecular biological tools have also played an important role in the development of continuous cell lines that can be useful for target organ toxicity testing. These cell lines are generated by transfecting DNA into primary cells. In the transfection procedure, the cells and the DNA are treated such that the DNA can be taken up by the cells. The DNA is usually from a virus and contains a gene or genes that, when expressed, allow the cells to become immortalized (i.e., able to live and grow for extended periods of time in culture). The DNA can also be engineered so that the immortalizing gene is controlled by an inducible promoter. The advantage of this type of construct is that the cells will divide only when they receive the appropriate chemical stimulus to allow expression of the immortalizing gene. An example of such a construct is the large T antigen gene from Simian Virus 40 (SV40) (the immortalizing gene), preceded by the promoter region of the metallothionein (MT) gene, which is induced by the presence of a metal in the culture medium. Thus, after the gene is transfected into the cells, the cells may be treated with low concentrations of zinc to stimulate the MT promoter and turn on the expression of the T antigen gene. Under these conditions, the cells proliferate. When zinc is removed from the medium, the cells stop dividing and under ideal conditions return to a state where they express their tissue-specific functions.
The ability to generate immortalized cells, combined with the advances in cell culture technology, has greatly contributed to the creation of cell lines from many different organs, including brain, kidney and liver. However, before these cell lines may be used as a surrogate for the bona fide cell types, they must be carefully characterized to determine how “normal” they really are.
Other in vitro systems for studying target organ toxicity involve increasing complexity. As in vitro systems progress in complexity from single cell to whole organ culture, they become more comparable to the in vivo milieu, but at the same time they become much more difficult to control given the increased number of variables. Therefore, what may be gained in moving to a higher level of organization can be lost in the inability of the researcher to control the experimental environment. Table 1 compares some of the characteristics of various in vitro systems that have been used to study hepatotoxicity.
Table 1. Comparison of in vitro systems for hepatotoxicity studies
| System | Level of interaction | Ability to retain liver-specific functions | Potential duration of culture | Ability to control environment |
|---|---|---|---|---|
| Immortalized cell lines | some cell-to-cell (varies with cell line) | poor to good (varies with cell line) | indefinite | excellent |
| Primary hepatocyte cultures | cell-to-cell | fair to excellent (varies with culture conditions) | days to weeks | excellent |
| Liver cell co-cultures | cell-to-cell (between the same and different cell types) | good to excellent | weeks | excellent |
| Liver slices | cell-to-cell (among all cell types) | good to excellent | hours to days | good |
| Isolated, perfused liver | cell-to-cell (among all cell types) and intra-organ | excellent | hours | fair |
Precision-cut tissue slices are being used more extensively for toxicological studies. There are new instruments available that enable the researcher to cut uniform tissue slices in a sterile environment. Tissue slices offer some advantage over cell culture systems in that all of the cell types of the organ are present and they maintain their in vivo architecture and intercellular communication. Thus, in vitro studies may be conducted to determine the target cell type within an organ as well as to investigate specific target organ toxicity. A disadvantage of the slices is that they degenerate rapidly after the first 24 hours of culture, mainly due to poor diffusion of oxygen to the cells on the interior of the slices. However, recent studies have indicated that more efficient aeration may be achieved by gentle rotation. This, together with the use of a more complex medium, allows the slices to survive for up to 96 hours.
Tissue explants are similar in concept to tissue slices and may also be used to determine the toxicity of chemicals in specific target organs. Tissue explants are established by removing a small piece of tissue (for teratogenicity studies, an intact embryo) and placing it into culture for further study. Explant cultures have been useful for short-term toxicity studies including irritation and corrosivity in skin, asbestos studies in trachea and neurotoxicity studies in brain tissue.
Isolated perfused organs may also be used to assess target organ toxicity. These systems offer an advantage similar to that of tissue slices and explants in that all cell types are present, but without the stress to the tissue introduced by the manipulations involved in preparing slices. In addition, they allow for the maintenance of intra-organ interactions. A major disadvantage is their short-term viability, which limits their use for in vitro toxicity testing. In terms of serving as an alternative, these cultures may be considered a refinement since the animals do not experience the adverse consequences of in vivo treatment with toxicants. However, their use does not significantly decrease the numbers of animals required.
In summary, there are several types of in vitro systems available for assessing target organ toxicity. It is possible to acquire much information about mechanisms of toxicity using one or more of these techniques. The difficulty remains in knowing how to extrapolate from an in vitro system, which represents a relatively small part of the toxicological process, to the whole process occurring in vivo.
In Vitro Tests for Ocular Irritation
Perhaps the most contentious whole-animal toxicity test from an animal welfare perspective is the Draize test for eye irritation, which is conducted in rabbits. In this test, a small fixed dose of a chemical is placed in one of the rabbit’s eyes while the other eye is used as a control. The degree of irritation and inflammation is scored at various times after exposure. A major effort is being made to develop methodologies to replace this test, which has been criticized not only for humane reasons, but also because of the subjectivity of the observations and variability of the results. It is interesting to note that despite the harsh criticism the Draize test has received, it has proven to be remarkably successful in predicting human eye irritants, particularly slightly to moderately irritating substances, which are difficult to identify by other methods. Thus, the demands on in vitro alternatives are great.
The quest for alternatives to the Draize test is a complicated one, albeit one that is predicted to be successful. Numerous in vitro and other alternatives have been developed, and in some cases they have been implemented. Refinement alternatives to the Draize test, which by definition are less painful or distressful to the animals, include the Low Volume Eye Test, in which smaller amounts of test materials are placed in the rabbits’ eyes, not only for humane reasons but also to more closely mimic the amounts to which people may actually be accidentally exposed. Another refinement is that substances with a pH less than 2 or greater than 11.5 are no longer tested in animals, since they are known to be severely irritating to the eye.
Between 1980 and 1989, there was an estimated 87% decline in the number of rabbits used for eye irritation testing of cosmetics. In vitro tests have been incorporated as part of a tier-testing approach to bring about this vast reduction in whole-animal tests. This approach is a multi-step process that begins with a thorough examination of the historical eye irritation data and physical and chemical analysis of the chemical to be evaluated. If these two processes do not yield enough information, then a battery of in vitro tests is performed. The additional data obtained from the in vitro tests might then be sufficient to assess the safety of the substance. If not, then the final step would be to perform limited in vivo tests. It is easy to see how this approach can eliminate or at least drastically reduce the numbers of animals needed to predict the safety of a test substance.
The battery of in vitro tests that is used as part of this tier-testing strategy depends upon the needs of the particular industry. Eye irritation testing is done by a wide variety of industries from cosmetics to pharmaceuticals to industrial chemicals. The type of information required by each industry varies and therefore it is not possible to define a single battery of in vitro tests. A test battery is generally designed to assess five parameters: cytotoxicity, changes in tissue physiology and biochemistry, quantitative structure-activity relationships, inflammation mediators, and recovery and repair. An example of a test for cytotoxicity, which is one possible cause for irritation, is the neutral red assay using cultured cells (see above). Changes in cellular physiology and biochemistry resulting from exposure to a chemical may be assayed in cultures of human corneal epithelial cells. Alternatively, investigators have also used intact or dissected bovine or chicken eyeballs obtained from slaughterhouses. Many of the endpoints measured in these whole organ cultures are the same as those measured in vivo, such as corneal opacity and corneal swelling.
Inflammation is frequently a component of chemical-induced eye injury, and there are a number of assays available to examine this parameter. Various biochemical assays detect the presence of mediators released during the inflammatory process such as arachidonic acid and cytokines. The chorioallantoic membrane (CAM) of the hen’s egg may also be used as an indicator of inflammation. In the CAM assay, a small piece of the shell of a ten-to-14-day chick embryo is removed to expose the CAM. The chemical is then applied to the CAM and signs of inflammation, such as vascular hemorrhaging, are scored at various times thereafter.
One of the most difficult in vivo processes to assess in vitro is recovery and repair of ocular injury. A newly developed instrument, the silicon microphysiometer, measures small changes in extracellular pH and can be used to monitor cultured cells in real time. This analysis has been shown to correlate fairly well with in vivo recovery and has been used as an in vitro test for this process. This has been a brief overview of the types of tests being employed as alternatives to the Draize test for ocular irritation. It is likely that within the next several years a complete series of in vitro test batteries will be defined and each will be validated for its specific purpose.
The key to regulatory acceptance and implementation of in vitro test methodologies is validation, the process by which the credibility of a candidate test is established for a specific purpose. Efforts to define and coordinate the validation process have been made both in the United States and in Europe. The European Union established the European Centre for the Validation of Alternative Methods (ECVAM) in 1993 to coordinate efforts there and to interact with American organizations such as the Johns Hopkins Center for Alternatives to Animal Testing (CAAT), an academic centre in the United States, and the Interagency Coordinating Committee for the Validation of Alternative Methods (ICCVAM), composed of representatives from the National Institutes of Health, the US Environmental Protection Agency, the US Food and Drug Administration and the Consumer Product Safety Commission.
Validation of in vitro tests requires substantial organization and planning. There must be consensus among government regulators and industrial and academic scientists on acceptable procedures, and sufficient oversight by a scientific advisory board to ensure that the protocols meet set standards. The validation studies should be performed in a series of reference laboratories using calibrated sets of chemicals from a chemical bank and cells or tissues from a single source. Both intralaboratory repeatability and interlaboratory reproducibility of a candidate test must be demonstrated and the results subjected to appropriate statistical analysis. Once the results from the different components of the validation studies have been compiled, the scientific advisory board can make recommendations on the validity of the candidate test(s) for a specific purpose. In addition, results of the studies should be published in peer-reviewed journals and placed in a database.
The definition of the validation process is currently a work in progress. Each new validation study will provide information useful to the design of the next study. International communication and cooperation are essential for the expeditious development of a widely acceptable series of protocols, particularly given the increased urgency imposed by the passage of the EC Cosmetics Directive. This legislation may indeed provide the needed impetus for a serious validation effort to be undertaken. It is only through completion of this process that the acceptance of in vitro methods by the various regulatory communities can commence.
This article has provided a broad overview of the current status of in vitro toxicity testing. The science of in vitro toxicology is relatively young, but it is growing exponentially. The challenge for the years ahead is to incorporate the mechanistic knowledge generated by cellular and molecular studies into the vast inventory of in vivo data to provide a more complete description of toxicological mechanisms as well as to establish a paradigm by which in vitro data may be used to predict toxicity in vivo. It will only be through the concerted efforts of toxicologists and government representatives that the inherent value of these in vitro methods can be realized.
Genetic toxicity assessment is the evaluation of agents for their ability to induce any of three general types of changes (mutations) in the genetic material (DNA): gene, chromosomal and genomic. In organisms such as humans, the genes are composed of DNA, which consists of individual units called nucleotide bases. The genes are arranged in discrete physical structures called chromosomes. Genotoxicity can result in significant and irreversible effects upon human health. Genotoxic damage is a critical step in the induction of cancer and it can also be involved in the induction of birth defects and foetal death. The three classes of mutations mentioned above can occur within either of the two types of tissues possessed by organisms such as humans: sperm or eggs (germ cells) and the remaining tissue (somatic cells).
Assays that measure gene mutation are those that detect the substitution, addition or deletion of nucleotides within a gene. Assays that measure chromosomal mutation are those that detect breaks or chromosomal rearrangements involving one or more chromosomes. Assays that measure genomic mutation are those that detect changes in the number of chromosomes, a condition called aneuploidy. Genetic toxicity assessment has changed considerably since the development by Herman Muller in 1927 of the first assay to detect genotoxic (mutagenic) agents. Since then, more than 200 assays have been developed that measure mutations in DNA; however, fewer than ten assays are used commonly today for genetic toxicity assessment. This article reviews these assays, describes what they measure, and explores the role of these assays in toxicity assessment.
Identification of Cancer Hazards Prior to the Development of the Field of Genetic Toxicology
Genetic toxicology has become an integral part of the overall risk assessment process and has gained in stature in recent times as a reliable predictor for carcinogenic activity. However, prior to the development of genetic toxicology (before 1970), other methods were used, and are still being used, to identify potential cancer hazards to humans. There are six major categories of methods currently used for identifying human cancer risks: epidemiological studies, long-term in vivo bioassays, mid-term in vivo bioassays, short-term in vivo and in vitro bioassays, artificial intelligence (structure-activity), and mechanism-based inference.
Table 1. Advantages and disadvantages of current methods for identifying human cancer risks
| Method | Advantages | Disadvantages |
|---|---|---|
| Epidemiological studies | (1) humans are ultimate indicators of disease; (2) evaluate sensitive or susceptible populations; (3) occupational exposure cohorts; (4) environmental sentinel alerts | (1) generally retrospective (death certificates, recall biases, etc.); (2) insensitive, costly, lengthy; (3) reliable exposure data sometimes unavailable or difficult to obtain; (4) combined, multiple and complex exposures; lack of appropriate control cohorts; (5) experiments on humans not done; (6) cancer detection, not prevention |
| Long-term in vivo bioassays | (1) prospective and retrospective (validation) evaluations; (2) excellent correlation with identified human carcinogens; (3) exposure levels and conditions known; (4) identifies chemical toxicity and carcinogenicity effects; (5) results obtained relatively quickly; (6) qualitative comparisons among chemical classes; (7) integrative and interactive biologic systems related closely to humans | (1) rarely replicated, resource intensive; (2) limited facilities suitable for such experiments; (3) species extrapolation debate; (4) exposures used are often at levels far in excess of those experienced by humans; (5) single-chemical exposure does not mimic human exposures, which are generally to multiple chemicals simultaneously |
| Mid- and short-term in vivo and in vitro bioassays | (1) more rapid and less expensive than other assays; (2) large samples that are easily replicated; (3) biologically meaningful end points are measured (mutation, etc.); (4) can be used as screening assays to select chemicals for long-term bioassays | (1) in vitro not fully predictive of in vivo; (2) usually organism or organ specific; (3) potencies not comparable to whole animals or humans |
| Chemical structure–biological activity associations | (1) relatively easy, rapid, and inexpensive; (2) reliable for certain chemical classes (e.g., nitrosamines and benzidine dyes); (3) developed from biological data but not dependent on additional biological experimentation | (1) not “biological”; (2) many exceptions to formulated rules; (3) retrospective and rarely (but becoming) prospective |
| Mechanism-based inferences | (1) reasonably accurate for certain classes of chemicals; (2) permits refinements of hypotheses; (3) can orient risk assessments to sensitive populations | (1) mechanisms of chemical carcinogenesis undefined, multiple, and likely chemical or class specific; (2) may fail to highlight exceptions to general mechanisms |
Rationale and Conceptual Basis for Genetic Toxicology Assays
Although the exact types and numbers of assays used for genetic toxicity assessment are constantly evolving and vary from country to country, the most common ones include assays for (1) gene mutation in bacteria and/or cultured mammalian cells and (2) chromosomal mutation in cultured mammalian cells and/or bone marrow within living mice. Some of the assays within this second category can also detect aneuploidy. Although these assays do not detect mutations in germ cells, they are used in preference to germ-cell assays primarily because of the extra cost and complexity of the latter. Nonetheless, germ-cell assays in mice are used when information about germ-cell effects is desired.
Systematic studies over a 25-year period (1970-1995), especially at the US National Toxicology Program in North Carolina, have resulted in the use of a discrete number of assays for detecting the mutagenic activity of agents. The rationale for evaluating the usefulness of the assays was based on their ability to detect agents that cause cancer in rodents and that are suspected of causing cancer in humans (i.e., carcinogens). This is because studies during the past several decades have indicated that cancer cells contain mutations in certain genes and that many carcinogens are also mutagens. Thus, cancer cells are viewed as containing somatic-cell mutations, and carcinogenesis is viewed as a type of somatic-cell mutagenesis.
The genetic toxicity assays used most commonly today have been selected not only because of their large database, relatively low cost, and ease of performance, but because they have been shown to detect many rodent and, presumptively, human carcinogens. Consequently, genetic toxicity assays are used to predict the potential carcinogenicity of agents.
An important conceptual and practical development in the field of genetic toxicology was the recognition that many carcinogens are modified by enzymes within the body, creating altered forms (metabolites) that are frequently the ultimate carcinogenic and mutagenic form of the parent chemical. Heinrich Malling showed that this metabolism can be duplicated in a petri dish by including a preparation from rodent liver, which contains many of the enzymes necessary to perform this metabolic conversion or activation. Thus, many genetic toxicity assays performed in dishes or tubes (in vitro) employ the addition of similar enzyme preparations. Simple preparations are called S9 mix, and purified preparations are called microsomes. Some bacterial and mammalian cells have now been genetically engineered to contain some of the genes from rodents or humans that produce these enzymes, reducing the need to add S9 mix or microsomes.
Genetic Toxicology Assays and Techniques
The primary bacterial systems used for genetic toxicity screening are the Salmonella (Ames) mutagenicity assay and, to a much lesser extent, strain WP2 of Escherichia coli. Studies in the mid-1980s indicated that the use of only two strains of the Salmonella system (TA98 and TA100) was sufficient to detect approximately 90% of the known Salmonella mutagens. Thus, these two strains are used for most screening purposes; however, various other strains are available for more extensive testing.
These assays are performed in a variety of ways, but two general procedures are the plate-incorporation and liquid-incubation assays. In the plate-incorporation assay, the cells, the test chemical and (when desired) the S9 are added together into a liquefied agar and poured onto the surface of an agar petri plate. The top agar hardens within a few minutes, and the plates are incubated for two to three days, after which time mutant cells have grown to form visually detectable clusters of cells called colonies, which are then counted. The agar medium contains selective agents or is composed of ingredients such that only the newly mutated cells will grow. The liquid-incubation assay is similar, except that the cells, test agent, and S9 are incubated together in liquid that does not contain liquefied agar, and then the cells are washed free of the test agent and S9 and seeded onto the agar.
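Scoring a plate-incorporation result reduces to comparing revertant-colony counts at each dose against the solvent control. The sketch below uses hypothetical triplicate counts and one commonly cited working criterion (a dose-related increase of at least two-fold over the solvent control); actual evaluation criteria vary among laboratories and regulatory guidelines:

```python
def mean_count(plates):
    """Mean revertant colonies across replicate plates at one dose."""
    return sum(plates) / len(plates)

# Hypothetical triplicate revertant-colony counts per plate.
solvent_control = [22, 25, 24]
doses_ug_per_plate = [10, 50, 250, 1250]          # ascending doses
treated = [[28, 30, 26], [45, 48, 50], [90, 95, 88], [130, 128, 135]]

control = mean_count(solvent_control)
folds = [mean_count(plates) / control for plates in treated]

# Illustrative decision rule: score the chemical positive (mutagenic) if
# the response increases with dose and reaches at least two-fold over
# the solvent control at some dose.
dose_related = all(b >= a for a, b in zip(folds, folds[1:]))
positive = dose_related and max(folds) >= 2.0
```

In practice the assay is run both with and without S9 mix, and equivocal results are repeated before a call is made.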
Mutations in cultured mammalian cells are detected primarily in one of two genes: hprt and tk. Similar to the bacterial assays, mammalian cell lines (developed from rodent or human cells) are exposed to the test agent in plastic culture dishes or tubes and then are seeded into culture dishes that contain medium with a selective agent that permits only mutant cells to grow. The assays used for this purpose include the CHO/HPRT, the TK6, and the mouse lymphoma L5178Y/TK+/- assays. Other cell lines containing various DNA repair mutations, as well as some human genes involved in metabolism, are also used. These systems permit the recovery of mutations within the gene (gene mutation) as well as mutations involving regions of the chromosome flanking the gene (chromosomal mutation). However, this latter type of mutation is recovered to a much greater extent by the tk gene systems than by the hprt gene systems due to the location of the tk gene.
Similar to the liquid-incubation assay for bacterial mutagenicity, mammalian cell mutagenicity assays generally involve the exposure of the cells in culture dishes or tubes in the presence of the test agent and S9 for several hours. The cells are then washed, cultured for several more days to allow the normal (wild-type) gene products to be degraded and the newly mutant gene products to be expressed and accumulate, and then they are seeded into medium containing a selective agent that permits only the mutant cells to grow. Like the bacterial assays, the mutant cells grow into visually detectable colonies that are then counted.
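The quantity reported from such an assay is the mutant frequency: mutant colonies per surviving cell, corrected for the cloning efficiency measured by seeding a known small number of cells into non-selective medium. A minimal sketch with hypothetical counts:

```python
def mutant_frequency(mutant_colonies, cells_seeded_selective,
                     colonies_nonselective, cells_seeded_nonselective):
    """Mutants per surviving cell, corrected for cloning efficiency."""
    cloning_efficiency = colonies_nonselective / cells_seeded_nonselective
    return mutant_colonies / (cells_seeded_selective * cloning_efficiency)

# Hypothetical counts from one treated culture.
mf = mutant_frequency(
    mutant_colonies=36,               # colonies in selective medium
    cells_seeded_selective=2_000_000, # cells plated under selection
    colonies_nonselective=180,        # colonies from 200 cells, no selection
    cells_seeded_nonselective=200,
)
mf_per_million = mf * 1e6             # the conventionally reported unit
```

A treated culture's mutant frequency is then compared to that of concurrent untreated and positive-control cultures.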
Chromosomal mutation is identified primarily by cytogenetic assays, which involve exposing rodents and/or rodent or human cells in culture dishes to a test chemical, allowing one or more cell divisions to occur, staining the chromosomes, and then visually examining the chromosomes through a microscope to detect alterations in the structure or number of chromosomes. Although a variety of endpoints can be examined, the two that are currently accepted by regulatory agencies as being the most meaningful are chromosomal aberrations and a subcategory called micronuclei.
Considerable training and expertise are required to score cells for the presence of chromosomal aberrations, making this a costly procedure in terms of time and money. In contrast, micronuclei require little training, and their detection can be automated. Micronuclei appear as small dots within the cell that are distinct from the nucleus, which contains the chromosomes. Micronuclei result from either chromosome breakage or from aneuploidy. Because of the ease of scoring micronuclei compared to chromosomal aberrations, and because recent studies indicate that agents that induce chromosomal aberrations in the bone marrow of living mice generally induce micronuclei in this tissue, micronuclei are now commonly measured as an indication of the ability of an agent to induce chromosomal mutation.
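Because micronucleus scoring is a simple count, the result is conventionally expressed as micronucleated cells per 1,000 (or 2,000) cells scored and compared with the concurrent control. A minimal illustration with hypothetical counts:

```python
def mn_per_1000(micronucleated, cells_scored):
    """Micronucleated cells per 1,000 cells scored."""
    return 1000.0 * micronucleated / cells_scored

# Hypothetical scoring results: 2,000 cells scored per group.
control_freq = mn_per_1000(micronucleated=4, cells_scored=2000)
treated_freq = mn_per_1000(micronucleated=26, cells_scored=2000)
fold_increase = treated_freq / control_freq
```

In an actual study the difference would also be evaluated statistically and checked for dose-relatedness before the agent is called positive.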
Although germ-cell assays are used far less frequently than the other assays described above, they are indispensable in determining whether an agent poses a risk to the germ cells, mutations in which can lead to health effects in succeeding generations. The most commonly used germ-cell assays are in mice, and involve systems that detect (1) heritable translocations (exchanges) among chromosomes (heritable translocation assay), (2) gene or chromosomal mutations involving specific genes (visible or biochemical specific-locus assays), and (3) mutations that affect viability (dominant lethal assay). As with the somatic-cell assays, the working assumption with the germ-cell assays is that agents positive in these assays are presumed to be potential human germ-cell mutagens.
Current Status and Future Prospects
Recent studies have indicated that only three pieces of information were necessary to detect approximately 90% of a set of 41 rodent carcinogens (i.e., presumptive human carcinogens and somatic-cell mutagens). These included (1) knowledge of the chemical structure of the agent, especially whether it contains electrophilic moieties (see section on structure-activity relationships); (2) Salmonella mutagenicity data; and (3) data from a 90-day chronic toxicity assay in rodents (mice and rats). Indeed, essentially all of the IARC-declared human carcinogens are detectable as mutagens using just the Salmonella assay and the mouse bone-marrow micronucleus assay. The use of these mutagenicity assays for detecting potential human carcinogens is supported further by the finding that most human carcinogens are carcinogenic in both rats and mice (trans-species carcinogens) and that most trans-species carcinogens are mutagenic in Salmonella and/or induce micronuclei in mouse bone marrow.
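The three pieces of information above can be viewed as independent lines of evidence in a weight-of-evidence screen. The following sketch is a purely illustrative decision rule (flag a chemical for priority long-term testing when at least two lines of evidence are positive); it is not the scheme used in the studies cited:

```python
def flag_for_long_term_bioassay(electrophilic_structure,
                                salmonella_positive,
                                subchronic_toxicity_positive):
    """Illustrative weight-of-evidence rule: flag the chemical when two
    or more of the three lines of evidence are positive."""
    evidence = [electrophilic_structure,
                salmonella_positive,
                subchronic_toxicity_positive]
    return sum(evidence) >= 2

# Hypothetical screening outcomes for two chemicals.
chemical_a = flag_for_long_term_bioassay(True, True, False)   # flagged
chemical_b = flag_for_long_term_bioassay(False, False, True)  # not flagged
```

Real prioritization schemes weight the evidence more carefully (e.g., potency, class-specific reliability of structure alerts), but the additive logic is the same.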
With advances in DNA technology, the human genome project, and an improved understanding of the role of mutation in cancer, new genotoxicity assays are being developed that will likely be incorporated into standard screening procedures. Among these is the use of transgenic cells and rodents. Transgenic systems are those in which a gene from another species has been introduced into a cell or organism. For example, transgenic mice are now in experimental use that permit the detection of mutation in any organ or tissue of the animal, based on the introduction of a bacterial gene into the mouse. Bacterial cells, such as Salmonella, and mammalian cells (including human cell lines) are now available that contain genes involved in the metabolism of carcinogenic/mutagenic agents, such as the P450 genes. Molecular analysis of the actual mutations induced in the transgene within transgenic rodents, within native genes such as hprt, or within the target genes of Salmonella can now be performed, so that the exact nature of the mutations induced by the chemicals can be determined, providing insights into the mechanism of action of the chemical and allowing comparisons to mutations in humans presumptively exposed to the agent.
Molecular advances in cytogenetics now permit more detailed evaluation of chromosomal mutations. These include the use of probes (small pieces of DNA) that attach (hybridize) to specific genes. Rearrangements of genes on the chromosome can then be revealed by the altered location of the probes, which are fluorescent and easily visualized as colored sectors on the chromosomes. The single-cell gel electrophoresis assay for DNA breakage (commonly called the “comet” assay) permits the detection of DNA breaks within single cells and may become an extremely useful tool in combination with cytogenetic techniques for detecting chromosomal damage.
After many years of use and the generation of a large and systematically developed database, genetic toxicity assessment can now be done with just a few assays for relatively small cost in a short period of time (a few weeks). The data produced can be used to predict the ability of an agent to be a rodent and, presumptively, human carcinogen/somatic-cell mutagen. Such an ability makes it possible to limit the introduction into the environment of mutagenic and carcinogenic agents and to develop alternative, nonmutagenic agents. Future studies should lead to even better methods with greater predictivity than the current assays.