Ragheb El-Sergany

Monday, June 28, 2010

Human Genome Project


The Human Genome Project (HGP) was an international scientific research project whose primary goal was to determine the sequence of chemical base pairs which make up DNA, and to identify and map the approximately 20,000–25,000 genes of the human genome from both a physical and functional standpoint.[1]

The project began in 1990 and was initially headed by James D. Watson at the U.S. National Institutes of Health. A working draft of the genome was released in 2000 and a complete one in 2003, with further analysis still being published. A parallel project was conducted outside of government by the Celera Corporation. Most of the government-sponsored sequencing was performed in universities and research centers from the United States, the United Kingdom, Japan, France, Germany, China, India, Canada, and New Zealand. The mapping of human genes is an important step in the development of medicines and other aspects of health care.

While the objective of the Human Genome Project is to understand the genetic makeup of the human species, the project has also focused on several other nonhuman organisms such as E. coli, the fruit fly, and the laboratory mouse. It remains one of the largest single investigational projects in modern science.

The Human Genome Project originally aimed to map the nucleotides contained in a human haploid reference genome (more than three billion). Several groups have announced efforts to extend this to diploid human genomes, including the International HapMap Project, Applied Biosystems, Perlegen, Illumina, JCVI, the Personal Genome Project, and Roche-454.

The "genome" of any given individual (except for identical twins and cloned organisms) is unique; mapping "the human genome" involves sequencing multiple variations of each gene. The project did not study the entire DNA found in human cells; some heterochromatic areas (about 8% of the total genome) remain un-sequenced.
Project

Background

The project began with the culmination of several years of work supported by the United States Department of Energy, in particular workshops in 1984 [2] and 1986 and a subsequent initiative of the US Department of Energy.[3] This 1987 report stated boldly, "The ultimate goal of this initiative is to understand the human genome" and "knowledge of the human genome is as necessary to the continuing progress of medicine and other health sciences as knowledge of human anatomy has been for the present state of medicine." Candidate technologies were already being considered for the proposed undertaking at least as early as 1985.[4]

James D. Watson was head of the National Center for Human Genome Research at the National Institutes of Health (NIH) in the United States starting from 1988. Largely due to his disagreement with his boss, Bernadine Healy, over the issue of patenting genes, Watson was forced to resign in 1992. He was replaced by Francis Collins in April 1993, and the name of the Center was changed to the National Human Genome Research Institute (NHGRI) in 1997.

The $3-billion project was formally founded in 1990 by the United States Department of Energy and the U.S. National Institutes of Health, and was expected to take 15 years. In addition to the United States, the international consortium comprised geneticists in the United Kingdom, France, Germany, Japan, China, and India.

Due to widespread international cooperation and advances in the field of genomics (especially in sequence analysis), as well as major advances in computing technology, a 'rough draft' of the genome was finished in 2000 (announced jointly by then US President Bill Clinton and the British Prime Minister Tony Blair on June 26, 2000).[5] This first available rough draft assembly of the genome was completed by the UCSC Genome Bioinformatics Group, primarily led by then graduate student Jim Kent. Ongoing sequencing led to the announcement of the essentially complete genome in April 2003, 2 years earlier than planned.[6] In May 2006, another milestone was passed on the way to completion of the project, when the sequence of the last chromosome was published in the journal Nature.[7]
State of completion

There are multiple definitions of the "complete sequence of the human genome". According to some of these definitions, the genome has already been completely sequenced, and according to other definitions, the genome has yet to be completely sequenced. There have been multiple popular press articles reporting that the genome was "complete." The genome has been completely sequenced using the definition employed by the International Human Genome Project. A graphical history of the human genome project shows that most of the human genome was complete by the end of 2003. However, there are a number of regions of the human genome that can be considered unfinished:

* First, the central regions of each chromosome, known as centromeres, are highly repetitive DNA sequences that are difficult to sequence using current technology. The centromeres are millions (possibly tens of millions) of base pairs long, and for the most part these are entirely un-sequenced.
* Second, the ends of the chromosomes, called telomeres, are also highly repetitive, and for most of the 46 chromosome ends these too are incomplete. It is not known precisely how much sequence remains before the telomeres of each chromosome are reached, but as with the centromeres, current technological constraints are prohibitive.
* Third, there are several loci in each individual's genome that contain members of multigene families that are difficult to disentangle with shotgun sequencing methods – these multigene families often encode proteins important for immune functions.
* Other than these regions, there remain a few dozen gaps scattered around the genome, some of them rather large, but there is hope that all these will be closed in the next couple of years.

In summary: the best estimates of total genome size indicate that about 92.3% of the genome has been completed [2] and it is likely that the centromeres and telomeres will remain un-sequenced until new technology is developed that facilitates their sequencing. Most of the remaining DNA is highly repetitive and unlikely to contain genes, but it cannot be truly known until it is entirely sequenced. Understanding the functions of all the genes and their regulation is far from complete. The roles of junk DNA, the evolution of the genome, the differences between individuals, and many other questions are still the subject of intense interest by laboratories all over the world.
Goals
The sequence of the human DNA is stored in databases available to anyone on the Internet. The U.S. National Center for Biotechnology Information (and sister organizations in Europe and Japan) house the gene sequence in a database known as GenBank, along with sequences of known and hypothetical genes and proteins. Other organizations such as the University of California, Santa Cruz[3], and Ensembl[4] present additional data and annotation and powerful tools for visualizing and searching it. Computer programs have been developed to analyze the data, because the data itself is difficult to interpret without such programs.

The process of identifying the boundaries between genes and other features in raw DNA sequence is called genome annotation and is the domain of bioinformatics. While expert biologists make the best annotators, their work proceeds slowly, and computer programs are increasingly used to meet the high-throughput demands of genome sequencing projects. The best current technologies for annotation make use of statistical models that take advantage of parallels between DNA sequences and human language, using concepts from computer science such as formal grammars.
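
As a toy illustration of the annotation problem, the sketch below scans a raw DNA string for open reading frames (ORFs), the simplest possible gene-boundary heuristic. This is only a sketch: real annotation pipelines use the statistical models described above (for example hidden Markov models), and the helper name find_orfs is invented here for illustration.

    # Toy genome annotation: find open reading frames on the forward strand.
    # Real gene finders use statistical models; this only conveys the flavor.
    START, STOPS = "ATG", {"TAA", "TAG", "TGA"}

    def find_orfs(dna, min_codons=3):
        """Return (start, end) index pairs of ORFs on the forward strand."""
        orfs = []
        for frame in range(3):                      # three reading frames
            i = frame
            while i + 3 <= len(dna):
                if dna[i:i + 3] == START:
                    j = i + 3
                    while j + 3 <= len(dna) and dna[j:j + 3] not in STOPS:
                        j += 3
                    if j + 3 <= len(dna) and (j - i) // 3 >= min_codons:
                        orfs.append((i, j + 3))     # include the stop codon
                    i = j
                i += 3
        return orfs

    if __name__ == "__main__":
        print(find_orfs("CCATGAAATTTGGGTAACCCATGCCCTAG"))   # [(2, 17)]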

Another, often overlooked, goal of the HGP is the study of its ethical, legal, and social implications. It is important to research these issues and find the most appropriate solutions before they become large dilemmas whose effect will manifest in the form of major political concerns.[citation needed]

All humans have unique gene sequences. Therefore the data published by the HGP does not represent the exact sequence of each and every individual's genome. It is the combined "reference genome" of a small number of anonymous donors. The HGP genome is a scaffold for future work in identifying differences among individuals. Most of the current effort in identifying differences among individuals involves single-nucleotide polymorphisms and the HapMap.
Interpretations

Key findings of the draft (2001) and complete (2004) genome sequences include[citation needed]

1. There are approx. 24,000 genes in human beings, the same range as in mice and twice that of roundworms. Understanding how these genes express themselves will provide clues to how diseases are caused.[citation needed]

2. Between 1.1% and 1.4% of the genome's sequence codes for proteins.

3. The human genome has significantly more segmental duplications (nearly identical, repeated sections of DNA) than other mammalian genomes. These sections may underlie the creation of new primate-specific genes.

4. At the time when the draft sequence was published less than 7% of protein families appeared to be vertebrate specific
How it was accomplished
The Human Genome Project was started in 1989 with the goal of sequencing and identifying all three billion chemical units in the human genetic instruction set, finding the genetic roots of disease and then developing treatments. With the sequence in hand, the next step was to identify the genetic variants that increase the risk for common diseases like cancer and diabetes.

It was far too expensive at that time to think of sequencing patients’ whole genomes. So the National Institutes of Health embraced the idea for a "shortcut", which was to look just at sites on the genome where many people have a variant DNA unit. The theory behind the shortcut was that since the major diseases are common, so too would be the genetic variants that caused them. Natural selection keeps the human genome free of variants that damage health before children are grown, the theory held, but fails against variants that strike later in life, allowing them to become quite common. (In 2002 the National Institutes of Health started a $138 million project called the HapMap to catalog the common variants in European, East Asian and African genomes.)

The genome was broken into smaller pieces, approximately 150,000 base pairs in length. These pieces were then ligated into a type of vector known as "bacterial artificial chromosomes", or BACs, which are derived from genetically engineered bacterial chromosomes. The vectors containing the genes can be inserted into bacteria, where they are copied by the bacterial DNA replication machinery. Each of these pieces was then sequenced separately as a small "shotgun" project and then assembled. The assembled 150,000-base-pair pieces were then put together to re-create the chromosomes. This is known as the "hierarchical shotgun" approach, because the genome is first broken into relatively large chunks, which are then mapped to chromosomes before being selected for sequencing.
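
A minimal sketch of this hierarchical (clone-by-clone) idea follows, using tiny toy sizes instead of 150,000-base-pair BAC inserts and 500–800-base-pair reads, and a deliberately simplified regular tiling instead of truly random reads. The helper names shotgun_reads and assemble are invented for this sketch.

    import random

    def shotgun_reads(piece, read_len=12, step=6):
        """Cut one clone into overlapping reads (regular tiling here;
        real shotgun reads come from random positions)."""
        return [piece[i:i + read_len] for i in range(0, len(piece) - step, step)]

    def assemble(reads, overlap=6):
        """Re-assemble reads that each overlap the previous one by `overlap`."""
        contig = reads[0]
        for read in reads[1:]:
            assert contig.endswith(read[:overlap])   # check the overlap matches
            contig += read[overlap:]
        return contig

    if __name__ == "__main__":
        random.seed(0)
        genome = "".join(random.choice("ACGT") for _ in range(300))

        # Step 1: break the genome into large, BAC-sized pieces (here 60 bp).
        bacs = [genome[i:i + 60] for i in range(0, len(genome), 60)]

        # Step 2: shotgun-sequence and assemble each piece separately, then
        # place the assembled pieces back in their mapped order.
        reconstructed = "".join(assemble(shotgun_reads(b)) for b in bacs)
        print(reconstructed == genome)   # True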

Funding came from the US government through the National Institutes of Health and from the Wellcome Trust, a UK charity, which funded the Sanger Institute (then the Sanger Centre) in Great Britain, as well as from numerous other groups around the world.

The Human Genome Project has been called a mega project because of the following factors:

1. The human genome has approx. 3.3 billion base-pairs; if the cost of sequencing is US $3 per base-pair, then the approx. cost will be US $10 billion.

2. If the sequence obtained were to be stored in a typed form in books and if each page contains 1000 letters and each book contains 1000 pages, then 3300 such books would be needed to store the complete information.

However, expressed in computer storage units, (3.3 billion base pairs) × (2 bits per pair) = 825 megabytes of raw data, which is about the size of one music CD. If further compressed, this data could be expected to fit in less than 20 megabytes.
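
These figures can be checked with a few lines of arithmetic. This is only a back-of-the-envelope sketch; the 2-bits-per-base figure simply encodes the four-letter alphabet and ignores any further compression.

    # Back-of-the-envelope check of the storage, book and cost figures above.
    base_pairs = 3.3e9                      # approximate size of the human genome
    bits = base_pairs * 2                   # A, C, G, T -> 2 bits per base
    megabytes = bits / 8 / 1e6
    print(round(megabytes))                 # 825 -> roughly one music CD

    # Book version: 1000 letters per page, 1000 pages per book
    books = base_pairs / (1000 * 1000)
    print(int(books))                       # 3300

    # Cost version: US $3 per base pair
    print(base_pairs * 3 / 1e9, "billion US dollars")   # 9.9, about $10 billion
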
Public versus private approaches

In 1998, a similar, privately funded quest was launched by the American researcher Craig Venter, and his firm Celera Genomics. Venter was a scientist at the NIH during the early 1990s when the project was initiated. The $300,000,000 Celera effort was intended to proceed at a faster pace and at a fraction of the cost of the roughly $3 billion publicly funded project.

Celera used a technique called whole genome shotgun sequencing, employing pairwise end sequencing[8], which had been used to sequence bacterial genomes of up to six million base pairs in length, but not for anything nearly as large as the three billion base pair human genome.

Celera initially announced that it would seek patent protection on "only 200–300" genes, but later amended this to seeking "intellectual property protection" on "fully-characterized important structures" amounting to 100–300 targets. The firm eventually filed preliminary ("place-holder") patent applications on 6,500 whole or partial genes. Celera also promised to publish their findings in accordance with the terms of the 1996 "Bermuda Statement," by releasing new data annually (the HGP released its new data daily), although, unlike the publicly funded project, they would not permit free redistribution or scientific use of the data. The publicly funded competitor UC Santa Cruz was compelled to publish the first draft of the human genome before Celera for this reason. On July 7, 2000, the UCSC Genome Bioinformatics Group released a first working draft on the web. The scientific community downloaded one-half trillion bytes of information from the UCSC genome server in the first 24 hours of free and unrestricted access to the first ever assembled blueprint of our human species.[9]

In March 2000, President Clinton announced that the genome sequence could not be patented, and should be made freely available to all researchers. The statement sent Celera's stock plummeting and dragged down the biotechnology-heavy Nasdaq. The biotechnology sector lost about $50 billion in market capitalization in two days.

Although the working draft was announced in June 2000, it was not until February 2001 that Celera and the HGP scientists published details of their drafts. Special issues of Nature (which published the publicly funded project's scientific paper)[10] and Science (which published Celera's paper[11]) described the methods used to produce the draft sequence and offered analysis of the sequence. These drafts covered about 83% of the genome (90% of the euchromatic regions, with 150,000 gaps and the order and orientation of many segments not yet established). In February 2001, at the time of the joint publications, press releases announced that the project had been completed by both groups. Improved drafts were announced in 2003 and 2005, filling in approximately 92% of the sequence.

The competition proved to be very good for the project, spurring the public groups to modify their strategy in order to accelerate progress. The rival groups had initially agreed to pool their data, but the agreement fell apart when Celera refused to deposit its data in the unrestricted public database GenBank. Celera had incorporated the public data into their genome, but forbade the public effort to use Celera data.

HGP is the most well known of many international genome projects aimed at sequencing the DNA of a specific organism. While the human DNA sequence offers the most tangible benefits, important developments in biology and medicine are predicted as a result of the sequencing of model organisms, including mice, fruit flies, zebrafish, yeast, nematodes, plants, and many microbial organisms and parasites.

In 2004, researchers from the International Human Genome Sequencing Consortium (IHGSC) of the HGP announced a new estimate of 20,000 to 25,000 genes in the human genome.[12] Previously 30,000 to 40,000 had been predicted, while estimates at the start of the project reached up to as high as 2,000,000. The number continues to fluctuate and it is now expected that it will take many years to agree on a precise value for the number of genes in the human genome.
History
In 1976, the genome of the RNA virus Bacteriophage MS2 was the first complete genome to be determined, by Walter Fiers and his team at the University of Ghent (Ghent, Belgium).[13] The idea for the shotgun technique came from the use of an algorithm that combined sequence information from many small fragments of DNA to reconstruct a genome. This technique was pioneered by Frederick Sanger to sequence the genome of the phage Φ-X174, a virus (bacteriophage) that infects bacteria; in 1977 this became the first fully sequenced DNA genome.[14] The technique was called shotgun sequencing because the genome was broken into millions of pieces as if it had been blasted with a shotgun. In order to scale up the method, both the sequencing and genome assembly had to be automated, as they were in the 1980s.

Those techniques were shown to be applicable to the sequencing of the first free-living bacterial genome (1.8 million base pairs) of Haemophilus influenzae in 1995,[15] and to the first animal genome (~100 Mbp).[16] These efforts used automated sequencers producing individual reads of approximately 500 base pairs at that time. Paired sequences separated by a fixed distance of around 2,000 base pairs were a critical element enabling the development of the first genome assembly programs for reconstructing large regions of genomes (known as 'contigs').

Three years later, in 1998, the announcement by the newly-formed Celera Genomics that it would scale up the pairwise end sequencing method to the human genome was greeted with skepticism in some circles. The shotgun technique breaks the DNA into fragments of various sizes, ranging from 2,000 to 300,000 base pairs in length, forming what is called a DNA "library". Using an automated DNA sequencer the DNA is read in 800bp lengths from both ends of each fragment. Using a complex genome assembly algorithm and a supercomputer, the pieces are combined and the genome can be reconstructed from the millions of short, 800 base pair fragments. The success of both the public and privately funded effort hinged upon a new, more highly automated capillary DNA sequencing machine, called the Applied Biosystems 3700, that ran the DNA sequences through an extremely fine capillary tube rather than a flat gel. Even more critical was the development of a new, larger-scale genome assembly program, which could handle the 30–50 million sequences that would be required to sequence the entire human genome with this method. At the time, such a program did not exist. One of the first major projects at Celera Genomics was the development of this assembler, which was written in parallel with the construction of a large, highly automated genome sequencing factory. Development of the assembler was led by Brian Ramos. The first version of this assembler was demonstrated in 2000, when the Celera team joined forces with Professor Gerald Rubin to sequence the fruit fly Drosophila melanogaster using the whole-genome shotgun method[17]. At 130 million base pairs, it was at least 10 times larger than any genome previously shotgun assembled. One year later, the Celera team published their assembly of the three billion base pair human genome.
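
As a minimal sketch of the overlap-based assembly idea, the snippet below implements a toy greedy overlap-layout assembler for error-free reads. It is only an illustration of the principle: Celera's actual assembler was vastly more sophisticated and also exploited the paired-end constraints described above, and the helper names overlap and greedy_assemble are invented for this sketch.

    def overlap(a, b, min_len=5):
        """Length of the longest suffix of `a` that is a prefix of `b`."""
        best = 0
        for k in range(min_len, min(len(a), len(b)) + 1):
            if a[-k:] == b[:k]:
                best = k
        return best

    def greedy_assemble(reads, min_len=5):
        reads = list(reads)
        while len(reads) > 1:
            # Find the pair of reads with the largest overlap and merge them.
            best = (0, None, None)
            for i, a in enumerate(reads):
                for j, b in enumerate(reads):
                    if i != j:
                        k = overlap(a, b, min_len)
                        if k > best[0]:
                            best = (k, i, j)
            k, i, j = best
            if k == 0:                       # no overlaps left: stop
                break
            merged = reads[i] + reads[j][k:]
            reads = [r for n, r in enumerate(reads) if n not in (i, j)] + [merged]
        return reads

    if __name__ == "__main__":
        genome = "ATGCGTACGTTAGCCGATCGATCGGAT"
        reads = [genome[i:i + 12] for i in (0, 6, 12, 15)]   # overlapping fragments
        print(greedy_assemble(reads))   # ['ATGCGTACGTTAGCCGATCGATCGGAT']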

The Human Genome Project was a 13-year mega project, launched in 1990 and completed in 2003. The project is closely associated with the branch of biology called bioinformatics. The Human Genome Project international consortium announced the publication of a draft sequence and analysis of the human genome, the genetic blueprint for the human being. An American company, Celera, led by Craig Venter, and the large international collaboration of distinguished scientists led by Francis Collins, director of the National Human Genome Research Institute, U.S., both published their findings.

This mega project was coordinated by the U.S. Department of Energy and the National Institutes of Health. During the early years of the project, the Wellcome Trust (U.K.) became a major partner, and other countries such as Japan, Germany, China and France contributed significantly. Already the atlas has revealed some startling facts. The two factors that made this project a success are:

1. Genetic Engineering Techniques, with which it is possible to isolate and clone any segment of DNA.
2. Availability of simple and fast technologies for determining DNA sequences.

Being the most complex organisms, human beings were expected to have more than 100,000 genes, or combinations of DNA that provide the commands for every characteristic of the body. Instead, the studies show that humans have only about 30,000 genes – around the same as mice, three times as many as flies, and only five times more than bacteria. Scientists reported that not only are the numbers similar, but the genes themselves, barring a few, are alike in mice and men.

In a companion volume to the Book of Life, scientists have created a catalogue of 1.4 million single-letter differences, or single-nucleotide polymorphisms (SNPs) – and specified their exact locations in the human genome. This SNP map, the world's largest publicly available catalogue of SNPs, promises to revolutionize both the mapping of diseases and the tracing of human history. The sequence information from the consortium has been immediately and freely released to the world, with no restrictions on its use or redistribution. The information is scanned daily by scientists in academia and industry, as well as by commercial database companies providing key information services to biotechnologists. Already, many genes have been identified from the genome sequence, including more than 30 that play a direct role in human diseases.

By dating the three million repeat elements and examining the pattern of interspersed repeats on the Y chromosome, scientists estimated the relative mutation rates in the X and the Y chromosomes and in the male and the female germ lines. They found that the ratio of mutations in males versus females is 2:1. Scientists point to several possible reasons for the higher mutation rate in the male germ line, including the fact that there are a greater number of cell divisions involved in the formation of sperm than in the formation of eggs.
Methods

The IHGSC used pair-end sequencing plus whole-genome shotgun mapping of large (≈100 Kbp) plasmid clones and shotgun sequencing of smaller plasmid sub-clones plus a variety of other mapping data to orient and check the assembly of each human chromosome[10].

The Celera group emphasized the importance of the “whole-genome shotgun” sequencing method, relying on sequence information to orient and locate their fragments within the chromosome. However, they used the publicly available data from the HGP to assist in the assembly and orientation process, raising concerns that the Celera sequence was not independently derived.
Genome donors

In the IHGSC international public-sector Human Genome Project (HGP), researchers collected blood (female) or sperm (male) samples from a large number of donors. Only a few of many collected samples were processed as DNA resources. Thus the donor identities were protected so neither donors nor scientists could know whose DNA was sequenced. DNA clones from many different libraries were used in the overall project, with most of those libraries being created by Dr. Pieter J. de Jong. It has been informally reported, and is well known in the genomics community, that much of the DNA for the public HGP came from a single anonymous male donor from Buffalo, New York (code name RP11).[20]

HGP scientists used white blood cells from the blood of two male and two female donors (randomly selected from 20 of each), with each donor yielding a separate DNA library. One of these libraries (RP11) was used considerably more than others, due to quality considerations. One minor technical issue is that male samples contain just over half as much DNA from the sex chromosomes (one X chromosome and one Y chromosome) compared to female samples (which contain two X chromosomes). The other 22 chromosomes (the autosomes) are the same for both sexes.

Although the main sequencing phase of the HGP has been completed, studies of DNA variation continue in the International HapMap Project, whose goal is to identify patterns of single-nucleotide polymorphism (SNP) groups (called haplotypes, or “haps”). The DNA samples for the HapMap came from a total of 270 individuals: Yoruba people in Ibadan, Nigeria; Japanese people in Tokyo; Han Chinese in Beijing; and the French Centre d'Etude du Polymorphisme Humain (CEPH) resource, which consisted of residents of the United States having ancestry from Western and Northern Europe.

In the Celera Genomics private-sector project, DNA from five different individuals was used for sequencing. The lead scientist of Celera Genomics at that time, Craig Venter, later acknowledged (in a public letter to the journal Science) that his DNA was one of 21 samples in the pool, five of which were selected for use[21][22].

On September 4, 2007, a team led by Craig Venter published his complete DNA sequence[23], unveiling the six-billion-nucleotide genome of a single individual for the first time.
Benefits

The work on interpretation of genome data is still in its initial stages. It is anticipated that detailed knowledge of the human genome will provide new avenues for advances in medicine and biotechnology. Clear practical results of the project emerged even before the work was finished. For example, a number of companies, such as Myriad Genetics, started offering easy ways to administer genetic tests that can show predisposition to a variety of illnesses, including breast cancer, disorders of hemostasis, cystic fibrosis, liver diseases and many others. Also, the etiologies for cancers, Alzheimer's disease and other areas of clinical interest are considered likely to benefit from genome information and possibly may lead in the long term to significant advances in their management.

There are also many tangible benefits for biological scientists. For example, a researcher investigating a certain form of cancer may have narrowed down his/her search to a particular gene. By visiting the human genome database on the World Wide Web, this researcher can examine what other scientists have written about this gene, including (potentially) the three-dimensional structure of its product, its function(s), its evolutionary relationships to other human genes, or to genes in mice or yeast or fruit flies, possible detrimental mutations, interactions with other genes, body tissues in which this gene is activated, diseases associated with this gene or other datatypes.

Further, deeper understanding of the disease processes at the level of molecular biology may determine new therapeutic procedures. Given the established importance of DNA in molecular biology and its central role in determining the fundamental operation of cellular processes, it is likely that expanded knowledge in this area will facilitate medical advances in numerous areas of clinical interest that may not have been possible without them.

The analysis of similarities between DNA sequences from different organisms is also opening new avenues in the study of evolution. In many cases, evolutionary questions can now be framed in terms of molecular biology; indeed, many major evolutionary milestones (the emergence of the ribosome and organelles, the development of embryos with body plans, the vertebrate immune system) can be related to the molecular level. Many questions about the similarities and differences between humans and our closest relatives (the primates, and indeed the other mammals) are expected to be illuminated by the data from this project.

The Human Genome Diversity Project (HGDP), spinoff research aimed at mapping the DNA that varies between human ethnic groups, which was rumored to have been halted, actually did continue and to date has yielded new conclusions.[citation needed] In the future, HGDP could possibly expose new data in disease surveillance, human development and anthropology. HGDP could unlock secrets behind and create new strategies for managing the vulnerability of ethnic groups to certain diseases (see race in biomedicine). It could also show how human populations have adapted to these vulnerabilities.

Advantages of Human Genome Project:

1. Knowledge of the effects of variation of DNA among individuals can revolutionize the ways to diagnose, treat and even prevent a number of diseases that affect human beings.
2. It provides clues to the understanding of human biology.

Criticisms

For biologists, the genome has yielded one insightful surprise after another. But the primary goal of the Human Genome Project — to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments — has been largely elusive[24].

One sign of the genome’s limited use for medicine so far was a recent test of genetic predictions for heart disease. A medical team from Brigham and Women’s Hospital in Boston collected 101 genetic variants that had been statistically linked to heart disease in various genome-scanning studies. But the variants turned out to have no value in forecasting disease among 19,000 women who had been followed for 12 years. The old-fashioned method of taking a family history was a better guide.[25]

The pharmaceutical industry has spent billions of dollars to reap genomic secrets and is starting to bring several genome-guided drugs to market. While drug companies continue to pour huge amounts of money into genome research, it has become clear that the genetics of most diseases are more complex than anticipated and that it will take many more years before new treatments may be able to transform medicine.

The last decade has brought a flood of discoveries of disease-causing mutations in the human genome. But with most diseases, the findings have explained only a small part of the risk of getting the disease. And many of the genetic variants linked to diseases, some scientists[who?] have begun to fear, could be statistical illusions.

Using the HapMap catalog of genetic variations, studies were conducted to see if any of the variants were more common in the patients with a given disease than in healthy people. These studies required large numbers of patients and cost several million dollars apiece. Nearly 400 of them had been completed by 2009. These studies revealed that although hundreds of common genetic variants have been statistically linked with various diseases, with most diseases, the common variants have turned out to explain just a fraction of the genetic risk. It now seems more likely that each common disease is mostly caused by large numbers of rare variants, ones too rare to have been cataloged by the HapMap.

Defenders of the HapMap and genome-wide association studies say that the approach made sense because it is only now becoming cheap enough to look for rare variants, and that many common variants do have roles in diseases.

As of June 2010, some 850 sites on the genome, most of them near genes, have been implicated in common diseases. But most of the sites linked with diseases are not in genes — the stretches of DNA that tell the cell to make proteins — and have no known biological function, leading some geneticists[who?] to suspect that the associations are spurious.

Many of them may stem from factors other than a true association with disease risk.[26] The shift among geneticists toward seeing rare variants as the major cause of common disease represents a major change of paradigm in human genetics.
Ethical, legal and social issues

The project's goals included not only identifying all of the approximately 24,000 genes in the human genome, but also addressing the ethical, legal, and social issues (ELSI) that might arise from the availability of genetic information. Five percent of the annual budget was allocated to address the ELSI arising from the project.

Debra Harry, Executive Director of the U.S. group Indigenous Peoples Council on Biocolonialism (IPCB), says that despite a decade of ELSI funding, the burden of genetics education has fallen on the tribes themselves to understand the motives of the Human Genome Project and its potential impacts on their lives. Meanwhile, the government has been busily funding projects studying indigenous groups without any meaningful consultation with the groups. (See Biopiracy.)[27]

The main criticism of ELSI is the failure to address the conditions raised by population-based research, especially with regard to unique processes for group decision-making and cultural worldviews. Genetic variation research such as the HGP is group population research, but most ethical guidelines, according to Harry, focus on individual rights instead of group rights. She says the research represents a clash of cultures: indigenous peoples' lives revolve around collectivity and group decision-making, whereas Western culture promotes individuality. Harry suggests that one of the challenges of ethical research is to include respect for collective review and decision making, while also upholding the Western model of individual rights.

Nuclear physics

Nuclear physics is the field of physics that studies the building blocks and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power and nuclear weapons, but the research has provided wider applications, including those in medicine (nuclear medicine, magnetic resonance imaging), materials engineering (ion implantation) and archaeology (radiocarbon dating).

The field of particle physics evolved out of nuclear physics and, for this reason, was included under the same term in earlier times.
History

The discovery of the electron by J. J. Thomson was the first indication that the atom had internal structure. At the turn of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model, in which the atom was a large positively charged ball with small negatively charged electrons embedded inside it. By the turn of the century physicists had also discovered three types of radiation coming from atoms, which they named alpha, beta, and gamma radiation. Experiments in 1911 by Lise Meitner and Otto Hahn, and by James Chadwick in 1914, discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a range of energies, rather than the discrete amounts of energy that were observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it indicated that energy was not conserved in these decays.

In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel, Pierre and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.
Rutherford's team discovers the nucleus

In 1907 Ernest Rutherford published "Radiation of the α Particle from Radium in passing through Matter".[1] Geiger expanded on this work in a communication to the Royal Society[2] with experiments he and Rutherford had done, passing α particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Marsden,[3] and further greatly expanded work was published in 1910 by Geiger.[4] In 1911-12 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.

The key experiment behind this announcement was performed in 1910, when Ernest Rutherford's team, with Hans Geiger and Ernest Marsden working under his supervision, fired alpha particles (helium nuclei) at a thin film of gold foil. The plum pudding model predicted that the alpha particles should come out of the foil with their trajectories being at most slightly bent. Rutherford had instructed his team to look for something that he was shocked to actually observe: a few particles were scattered through large angles, even completely backwards in some cases. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, beginning with Rutherford's analysis of the data in 1911, eventually led to the Rutherford model of the atom, in which the atom has a very small, very dense nucleus containing most of its mass and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles), and the nucleus was surrounded by 7 more orbiting electrons.

The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons had a spin of 1/2, and in the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1/2. Rasetti discovered, however, that nitrogen-14 has a spin of 1.
James Chadwick discovers the neutron

In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert L. Becker, and Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, which he called the neutron (following a suggestion about the need for such a particle by Rutherford). In the same year Dmitri Ivanenko suggested that neutrons were in fact spin 1/2 particles, that the nucleus contained neutrons to explain the mass not due to protons, and that there were no electrons in the nucleus—only protons and neutrons. The neutron spin immediately solved the problem of the spin of nitrogen-14: the one unpaired proton and one unpaired neutron in this model each contribute a spin of 1/2 in the same direction, giving a final total spin of 1.

With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with that of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way and—when nuclear reactions were measured—were found to agree with Einstein's calculation of the equivalence of mass and energy to high accuracy (within 1% as of 1934).
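
As a short worked example of that calculation (using standard textbook values rather than figures from this article), the binding energy is just the mass difference converted through mass–energy equivalence:

    \[
      B(Z,N) = \bigl[\, Z\,m_p + N\,m_n - m_{\text{nucleus}}(Z,N) \,\bigr] c^2
    \]

For the deuteron (Z = N = 1):

    \[
      \Delta m = (1.007276 + 1.008665 - 2.013553)\,\mathrm{u} \approx 0.002388\,\mathrm{u},
      \qquad B \approx 0.002388 \times 931.5\ \mathrm{MeV} \approx 2.22\ \mathrm{MeV}.
    \]
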
Yukawa's meson postulated to bind nuclei

In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.

With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high energy photons (gamma decay).

The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics which unifies the strong, weak, and electromagnetic forces.
Modern nuclear physics
A heavy nucleus can contain hundreds of nucleons, which means that with some approximation it can be treated as a classical system rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy which arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
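
The standard quantitative form of this picture is the semi-empirical (Bethe–Weizsäcker) mass formula; the coefficients quoted below are one common textbook parameterization (values vary slightly between fits) and are not taken from this article:

    \[
      B(A,Z) = a_V A - a_S A^{2/3} - a_C \frac{Z(Z-1)}{A^{1/3}} - a_A \frac{(A-2Z)^2}{A} + \delta(A,Z),
    \]

with roughly a_V ≈ 15.8 MeV (volume term), a_S ≈ 18.3 MeV (surface tension), a_C ≈ 0.71 MeV (Coulomb repulsion), a_A ≈ 23.2 MeV (neutron–proton asymmetry), and δ a small pairing correction.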

Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert-Mayer. Nuclei with certain numbers of neutrons and protons (the magic numbers 2, 8, 20, 50, 82, 126, ...) are particularly stable, because their shells are filled.

Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons, analogously to Cooper pairs of electrons.

Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of Rugby balls) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark-gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
Modern topics in nuclear physics
There are 80 elements which have at least one stable isotope (defined as an isotope never observed to decay), and in total there are about 256 such stable isotopes. However, there are thousands more well-characterized isotopes which are unstable. These radioisotopes decay over timescales ranging from fractions of a second to weeks, years, or many billions of years.
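
All of these timescales are governed by the same exponential decay law; only the decay constant λ (equivalently, the half-life) differs from isotope to isotope. As a standard reminder (not specific to this article):

    \[
      N(t) = N_0\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda}.
    \]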

For example, if a nucleus has too few or too many neutrons it may be unstable, and will decay after some period of time. In a process called beta decay, a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is turned into a proton, an electron and an antineutrino by the weak nuclear force. The element is transmuted to another element in the process, because while it previously had seven protons (which makes it nitrogen) it now has eight (which makes it oxygen).

In alpha decay the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays, until a stable element is formed.

In gamma decay, a nucleus decays from an excited state into a lower state by emitting a gamma ray. It is then stable. The element is not changed in the process.

Other more exotic decays are possible (see the main article). For example, in internal conversion decay, the energy from an excited nucleus may be used to eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons, but is not beta decay, and (unlike beta decay) does not transmute one element to another.
Nuclear fusion
When two low mass nuclei come into very close contact with each other it is possible for the strong force to fuse the two together. It takes a great deal of energy to push the nuclei close enough together for the strong force to have an effect, so the process of nuclear fusion can only take place at very high temperatures or high densities. Once the nuclei are close enough together the strong force overcomes their electromagnetic repulsion and fuses them into a new nucleus. A very large amount of energy is released when light nuclei fuse together because the binding energy per nucleon increases with mass number up until nickel-62. Stars like our sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. Research to find an economically viable method of using energy from a controlled fusion reaction is currently being undertaken by various research establishments (see JET and ITER).
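
A rough worked number for the solar case (using standard atomic masses, not figures from this article): converting four hydrogen atoms into one helium-4 atom releases

    \[
      Q \approx \bigl(4 \times 1.007825 - 4.002602\bigr)\,\mathrm{u} \times 931.5\ \mathrm{MeV/u} \approx 26.7\ \mathrm{MeV},
    \]

a small fraction of which is carried away by the two neutrinos.
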
Nuclear fission
For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones. This splitting of atoms is known as nuclear fission.
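
A rough estimate of the energy released (textbook binding-energy values, not figures from this article): the binding energy per nucleon rises from about 7.6 MeV in uranium to about 8.5 MeV in the mid-mass fission fragments, so splitting a nucleus of roughly 236 nucleons frees on the order of

    \[
      \Delta E \approx (8.5 - 7.6)\ \mathrm{MeV/nucleon} \times 236\ \mathrm{nucleons} \approx 200\ \mathrm{MeV}.
    \]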

The process of alpha decay may be thought of as a special type of spontaneous nuclear fission. This process produces a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.

For certain of the heaviest nuclei which produce neutrons on fission, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a so-called chain reaction. (Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions.) The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission type nuclear bombs such as the two that the United States used against Hiroshima and Nagasaki at the end of World War II. Heavy nuclei such as uranium and thorium may undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.

For a neutron-initiated chain-reaction to occur, there must be a critical mass of the element present in a certain space under certain conditions (these conditions slow and conserve neutrons for the reactions). There is one known example of a natural nuclear fission reactor, which was active in two regions of Oklo, Gabon, Africa, over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain-reactions.
Production of heavy elements

According to the theory, as the Universe cooled after the big bang it eventually became possible for particles as we know them to exist. The most common particles created in the big bang which are still easily observable to us today were protons (hydrogen) and electrons (in equal numbers). Some heavier elements were created as the protons collided with each other, but most of the heavy elements we see today were created inside of stars during a series of fusion stages, such as the proton-proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star. Since the binding energy per nucleon peaks around iron, energy is only released in fusion processes occurring below this point. Since the creation of heavier nuclei by fusion costs energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s process) or by the rapid, or r process. The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r process is thought to occur in supernova explosions because the conditions of high temperature, high neutron flux and ejected matter are present. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers). The r process duration is typically in the range of a few seconds.
References

1. ^ Philosophical Magazine (12, p 134-46)
2. ^ Proc. Roy. Soc. July 17, 1908
3. ^ Proc. Roy. Soc. A82 p 495-500
4. ^ Proc. Roy. Soc. Feb. 1, 1910

* Nuclear Physics by Irving Kaplan, 2nd edition, 1962, Addison-Wesley
* General Chemistry by Linus Pauling 1970 Dover Pub. ISBN 0-486-65622-5
* Introductory Nuclear Physics by Kenneth S. Krane Pub. Wiley
* Models of the Atomic Nucleus by N. Cook, Springer Verlag (2006), ISBN 3540285695

Nuclear power

Images (captions): the Ikata Nuclear Power Plant, a pressurized water reactor that cools by secondary coolant exchange with the ocean; the Susquehanna Steam Electric Station, a boiling water reactor, with the reactors located inside the rectangular containment buildings towards the front of the cooling towers; and three nuclear-powered ships (top to bottom), the nuclear cruisers USS Bainbridge and USS Long Beach with USS Enterprise, the first nuclear-powered aircraft carrier, in 1964, with crew members spelling out Einstein's mass-energy equivalence formula E=mc² on the flight deck.

Nuclear power is produced by controlled (i.e., non-explosive) nuclear reactions. Commercial and utility plants currently use nuclear fission reactions to heat water to produce steam, which is then used to generate electricity.

In 2009, 13-14% of the world's electricity came from nuclear power.[1] Also, more than 150 naval vessels using nuclear propulsion have been built.
Use
As of 2005, nuclear power provided 6.3% of the world's energy and 15% of the world's electricity, with the U.S., France, and Japan together accounting for 56.5% of nuclear generated electricity.[2] In 2007, the IAEA reported there were 439 nuclear power reactors in operation in the world,[3] operating in 31 countries.[4] As of December 2009, the world had 436 reactors.[5] Since commercial nuclear energy began in the mid-1950s, 2008 was the first year that no new nuclear power plant was connected to the grid, although two were connected in 2009.[5][6]

Annual generation of nuclear power has been on a slight downward trend since 2007, decreasing 1.8% in 2009 to 2558 TWh with nuclear power meeting 13-14% of the world's electricity demand.[1] One factor in the nuclear power percentage decrease since 2007 has been the prolonged shutdown of large reactors at the Kashiwazaki-Kariwa Nuclear Power Plant in Japan following the Niigata-Chuetsu-Oki earthquake.[1]

The United States produces the most nuclear energy, with nuclear power providing 19%[7] of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors—80% as of 2006.[8] In the European Union as a whole, nuclear energy provides 30% of the electricity.[9] Nuclear energy policy differs between European Union countries, and some, such as Austria, Estonia, and Ireland, have no active nuclear power stations. In comparison, France has a large number of these plants, with 16 multi-unit stations in current use.

In the US, while the coal and gas electricity industry is projected to be worth $85 billion by 2013, nuclear power generators are forecast to be worth $18 billion.[10]

Many military and some civilian ships (such as some icebreakers) use nuclear marine propulsion, a form of nuclear propulsion.[11] A few space vehicles have been launched using full-fledged nuclear reactors: the Soviet RORSAT series and the American SNAP-10A.

International research is continuing into safety improvements such as passively safe plants,[12] the use of nuclear fusion, and additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems.
Nuclear fusion

Nuclear fusion reactions are safer and generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult, and have yet to be realized on a scale that could be used in a functional power plant. Fusion power has been under intense theoretical and experimental investigation since the 1950s.
Use in space

Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. This is due to the much higher energy density of nuclear reactions: some 7 orders of magnitude (10,000,000 times) more energetic than the chemical reactions which power the current generation of rockets.
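
A rough comparison behind that factor (typical textbook magnitudes, not figures from this article): a chemical reaction releases on the order of 1 eV per molecule of propellant, i.e. roughly 0.1 eV per nucleon of reactant mass, while fission or fusion releases roughly 1 MeV per nucleon, giving

    \[
      \frac{E_{\text{nuclear}}}{E_{\text{chemical}}} \sim \frac{10^{6}\ \mathrm{eV/nucleon}}{10^{-1}\ \mathrm{eV/nucleon}} \approx 10^{7}.
    \]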

Radioactive decay has been used on a relatively small (few kW) scale, mostly to power space missions and experiments.
History
Origins
The pursuit of nuclear energy for electricity generation began soon after the discovery in the early 20th century that radioactive elements, such as radium, released immense amounts of energy, according to the principle of mass–energy equivalence. However, harnessing such energy was impractical, because intensely radioactive elements were, by their very nature, short-lived (high energy release is correlated with short half-lives). Nevertheless, the dream of harnessing "atomic energy" was quite strong, even though it was dismissed by such fathers of nuclear physics as Ernest Rutherford as "moonshine." This situation changed in the late 1930s with the discovery of nuclear fission.

In 1932, James Chadwick discovered the neutron, which was immediately recognized as a potential tool for nuclear experimentation because of its lack of an electric charge. Experimentation with bombardment of materials with neutrons led Frédéric and Irène Joliot-Curie to discover induced radioactivity in 1934, which allowed the creation of radium-like elements at a much lower price than natural radium. Further work by Enrico Fermi in the 1930s focused on using slow neutrons to increase the effectiveness of induced radioactivity. Experiments bombarding uranium with neutrons led Fermi to believe he had created a new, transuranic element, which he dubbed Hesperium.

But in 1938, German chemists Otto Hahn[13] and Fritz Strassmann, along with Austrian physicist Lise Meitner[14] and Meitner's nephew, Otto Robert Frisch,[15] conducted experiments with the products of neutron-bombarded uranium, as a means of further investigating Fermi's claims. They determined that the relatively tiny neutron split the nucleus of the massive uranium atoms into two roughly equal pieces, contradicting Fermi. This was an extremely surprising result: all other forms of nuclear decay involved only small changes to the mass of the nucleus, whereas this process—dubbed "fission" as a reference to biology—involved a complete rupture of the nucleus. Numerous scientists, including Leo Szilard, who was one of the first, recognized that if fission reactions released additional neutrons, a self-sustaining nuclear chain reaction could result. Once this was experimentally confirmed and announced by Frédéric Joliot-Curie in 1939, scientists in many countries (including the United States, the United Kingdom, France, Germany, and the Soviet Union) petitioned their governments for support of nuclear fission research, just on the cusp of World War II.
In the United States, where Fermi and Szilard had both emigrated, this led to the creation of the first man-made reactor, known as Chicago Pile-1, which achieved criticality on December 2, 1942. This work became part of the Manhattan Project, which built large reactors at the Hanford Site (formerly the town of Hanford, Washington) to breed plutonium for use in the first nuclear weapons, which were used on the cities of Hiroshima and Nagasaki. A parallel uranium enrichment effort also was pursued.

After World War II, the prospects of using "atomic energy" for good, rather than simply for war, were greatly advocated as a reason not to keep all nuclear research controlled by military organizations. However, most scientists agreed that civilian nuclear power would take at least a decade to master, and the fact that nuclear reactors also produced weapons-usable plutonium created a situation in which most national governments (such as those in the United States, the United Kingdom, Canada, and the USSR) attempted to keep reactor research under strict government control and classification. In the United States, reactor research was conducted by the U.S. Atomic Energy Commission, primarily at Oak Ridge, Tennessee, Hanford Site, and Argonne National Laboratory.

Work in the United States, United Kingdom, Canada, and USSR proceeded over the course of the late 1940s and early 1950s. Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100 kW. Nuclear marine propulsion was also actively researched in the US, with a test reactor being developed by 1953. (Eventually, the USS Nautilus, the first nuclear-powered submarine, would launch in 1955.) In 1953, US President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the 1954 Amendments to the Atomic Energy Act, which allowed rapid declassification of U.S. reactor technology and encouraged development by the private sector.
Early years
Calder Hall nuclear power station in the United Kingdom was the world's first nuclear power station to produce electricity in commercial quantities.[16]
The Shippingport Atomic Power Station in Shippingport, Pennsylvania was the first commercial reactor in the USA and was opened in 1957.

On June 27, 1954, the USSR's Obninsk Nuclear Power Plant became the world's first nuclear power plant to generate electricity for a power grid, producing around 5 megawatts of electric power. Later in 1954, Lewis Strauss, then chairman of the United States Atomic Energy Commission (U.S. AEC, forerunner of the U.S. Nuclear Regulatory Commission and the United States Department of Energy), spoke of electricity in the future being "too cheap to meter."[19] Strauss was referring to hydrogen fusion,[20][21] which was secretly being developed as part of Project Sherwood at the time, but his statement was interpreted as a promise of very cheap energy from nuclear fission. The U.S. AEC itself had issued far more conservative testimony regarding nuclear fission to the U.S. Congress only months before, projecting that "costs can be brought down... [to]... about the same as the cost of electricity from conventional sources..." Significant disappointment would develop later on, when the new nuclear plants did not provide energy "too cheap to meter."

In 1955 the United Nations' "First Geneva Conference", then the world's largest gathering of scientists and engineers, met to explore the technology. In 1957 EURATOM was launched alongside the European Economic Community (the latter is now the European Union). The same year also saw the launch of the International Atomic Energy Agency (IAEA).

The world's first commercial nuclear power station, Calder Hall in Sellafield, England, was opened in 1956 with an initial capacity of 50 MW (later 200 MW). The first commercial nuclear generator to become operational in the United States was the Shippingport Reactor (Pennsylvania, December 1957).

One of the first organizations to develop nuclear power was the U.S. Navy, for the purpose of propelling submarines and aircraft carriers. It has an unblemished record in nuclear safety, perhaps because of the stringent demands of Admiral Hyman G. Rickover, who was the driving force behind nuclear marine propulsion as well as the Shippingport Reactor (Alvin Radkowsky was chief scientist at the U.S. Navy nuclear propulsion division and was involved with the latter). The U.S. Navy has operated more nuclear reactors than any other entity, including the Soviet Navy, with no publicly known major incidents. The first nuclear-powered submarine, USS Nautilus (SSN-571), was put to sea in December 1954.[23] Two U.S. nuclear submarines, USS Scorpion and USS Thresher, have been lost at sea; both were lost due to malfunctions in systems not related to the reactor plants. The sites are monitored and no known leakage has occurred from the onboard reactors. The United States Army also had a nuclear power program, beginning in 1954. The SM-1 Nuclear Power Plant, at Ft. Belvoir, Virginia, was the first power reactor in the US to supply electrical energy to a commercial grid (VEPCO), in April 1957, before Shippingport.
Development
Installed nuclear capacity rose relatively quickly at first, from less than 1 gigawatt (GW) in 1960 to 100 GW in the late 1970s and 300 GW in the late 1980s. Since the late 1980s, worldwide capacity has risen much more slowly, reaching 366 GW in 2005. Between around 1970 and 1990, more than 50 GW of capacity was under construction (peaking at over 150 GW in the late 1970s and early 1980s); in 2005, around 25 GW of new capacity was planned. More than two-thirds of all nuclear plants ordered after January 1970 were eventually cancelled.[23] A total of 63 nuclear units were canceled in the USA between 1975 and 1980.
During the 1970s and 1980s, rising economic costs (related to extended construction times, largely due to regulatory changes and pressure-group litigation) and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s (U.S.) and 1990s (Europe), flat load growth and electricity liberalization also made the addition of large new baseload capacity unattractive.

The 1973 oil crisis had a significant effect on countries such as France and Japan, which had relied heavily on oil for electricity generation (39% and 73% respectively), prompting them to invest in nuclear power.[26][27] Today, nuclear power supplies about 80% and 30% of the electricity in those countries, respectively.
A general movement against nuclear power arose during the last third of the 20th century, based on the fear of a possible nuclear accident as well as the history of accidents, fears of radiation as well as the history of public radiation exposure, nuclear proliferation, and opposition to nuclear waste production and transport and the lack of any final storage plans. Protest movements against nuclear power first emerged in the USA in the late 1970s and spread quickly to Europe and the rest of the world. Anti-nuclear power groups emerged in every country that has had a nuclear power programme. Some of these anti-nuclear power organisations are reported to have developed considerable expertise on nuclear power and energy issues.

In 1992, the chairman of the Nuclear Regulatory Commission said that "his agency had been pushed in the right direction on safety issues because of the pleas and protests of nuclear watchdog groups". Health and safety concerns, the 1979 accident at Three Mile Island, and the 1986 Chernobyl disaster played a part in stopping new plant construction in many countries,[30][31] although the public policy organization Brookings Institution suggests that new nuclear units have not been ordered in the U.S. because of soft demand for electricity and cost overruns on nuclear plants due to regulatory issues and construction delays.[32]

Unlike the Three Mile Island accident, the much more serious Chernobyl accident did not increase regulations affecting Western reactors, since the Chernobyl reactors were of the problematic RBMK design used only in the Soviet Union, which, for example, lacked "robust" containment buildings. Many of these reactors are still in use today. However, changes were made both in the reactors themselves (the use of low-enriched uranium) and in the control system (the prevention of disabling safety systems) to reduce the possibility of a repeat accident.

An international organization was created to promote safety awareness and the professional development of operators in nuclear facilities: WANO, the World Association of Nuclear Operators.

Opposition in Ireland and Poland prevented nuclear programs there, while Austria (1978), Sweden (1980) and Italy (1987) (influenced by Chernobyl) voted in referendums to oppose or phase out nuclear power. In July 2009, the Italian Parliament passed a law that canceled the results of an earlier referendum and allowed the immediate start of the Italian nuclear program.
Nuclear reactor technology
Just as many conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear power plants convert the energy released from the nucleus of an atom, typically via nuclear fission.

When a relatively large fissile atomic nucleus (usually uranium-235 or plutonium-239) absorbs a neutron, a fission of the atom often results. Fission splits the atom into two or more smaller nuclei with kinetic energy (known as fission products) and also releases gamma radiation and free neutrons.[35] A portion of these neutrons may later be absorbed by other fissile atoms and create more fissions, which release more neutrons, and so on.[36]

This nuclear chain reaction can be controlled by using neutron poisons and neutron moderators to change the portion of neutrons that will go on to cause more fissions.[36] Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if unsafe conditions are detected.[37]
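As a rough illustration of how the effective neutron multiplication factor governs the chain reaction described above, the Python sketch below steps a neutron population through successive fission generations. The values of k are arbitrary assumptions used only to show the subcritical, critical, and supercritical behaviour; this is not a reactor-physics model.

```python
# Minimal sketch: neutron population over successive fission generations.
# k (effective multiplication factor) is an assumed, illustrative value:
#   k < 1 -> subcritical, the chain reaction dies out
#   k = 1 -> critical, steady state
#   k > 1 -> supercritical, the population grows
def neutron_population(k, generations, n0=1000):
    """Return the neutron count after each generation for a given k."""
    counts = [n0]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

if __name__ == "__main__":
    for k in (0.95, 1.00, 1.05):  # assumed values for demonstration only
        print(f"k = {k}:", [round(n) for n in neutron_population(k, 10)])
```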

A cooling system removes heat from the reactor core and transports it to another area of the plant, where the thermal energy can be harnessed to produce electricity or to do other useful work. Typically the hot coolant will be used as a heat source for a boiler, and the pressurized steam from that boiler will power one or more steam turbine driven electrical generators.[38]
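As a back-of-the-envelope illustration of the heat-to-electricity step just described, the sketch below converts an assumed reactor thermal power into electrical output using an assumed steam-cycle efficiency of roughly one third, a typical order of magnitude for light water plants. Both numbers are illustrative assumptions, not figures from the text.

```python
# Illustrative conversion of reactor thermal power to electrical power.
# Both inputs are assumptions chosen only to show the arithmetic.
def electrical_output(thermal_mw, efficiency=0.33):
    """Electrical power (MWe) from thermal power (MWt) at a given efficiency."""
    return thermal_mw * efficiency

if __name__ == "__main__":
    thermal_mw = 3000  # assumed thermal rating of the reactor core
    print(f"{electrical_output(thermal_mw):.0f} MWe from {thermal_mw} MWt")
```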

There are many different reactor designs, utilizing different fuels and coolants and incorporating different control schemes. Some of these designs have been engineered to meet a specific need. Reactors for nuclear submarines and large naval ships, for example, commonly use highly enriched uranium as a fuel. This fuel choice increases the reactor's power density and extends the usable life of the nuclear fuel load, but is more expensive and a greater risk to nuclear proliferation than some of the other nuclear fuels.[39]

A number of new designs for nuclear power generation, collectively known as the Generation IV reactors, are the subject of active research and may be used for practical power generation in the future. Many of these new designs specifically attempt to make fission reactors cleaner, safer and/or less of a risk to the proliferation of nuclear weapons. Passively safe plants (such as the ESBWR) are available to be built[40] and other designs that are believed to be nearly fool-proof are being pursued.[41] Fusion reactors, which may be viable in the future, diminish or eliminate many of the risks associated with nuclear fission.[42]
Flexibility of nuclear power plants

It is often claimed that nuclear stations are inflexible in their output, implying that other forms of energy would be required to meet peak demand. While that is true for certain reactors, this is no longer true of at least some modern designs.[43]

Nuclear plants are routinely used in load following mode on a large scale in France.[44]

Boiling water reactors normally have load-following capability, implemented by varying the recirculation water flow.
Life cycle
A nuclear reactor is only part of the life-cycle for nuclear power. The process starts with mining (see Uranium mining). Uranium mines are underground, open-pit, or in-situ leach mines. In any case, the uranium ore is extracted, usually converted into a stable and compact form such as yellowcake, and then transported to a processing facility. Here, the yellowcake is converted to uranium hexafluoride, which is then enriched using various techniques. At this point, the enriched uranium, containing more than the natural 0.7% U-235, is used to make rods of the proper composition and geometry for the particular reactor that the fuel is destined for. The fuel rods will spend about 3 operational cycles (typically 6 years total now) inside the reactor, generally until about 3% of their uranium has been fissioned, then they will be moved to a spent fuel pool where the short lived isotopes generated by fission can decay away. After about 5 years in a cooling pond, the spent fuel is radioactively and thermally cool enough to handle, and it can be moved to dry storage casks or reprocessed.
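The enrichment step mentioned above obeys a simple mass balance on the U-235 content of feed, product and tails. The sketch below applies the standard feed-to-product relation with assumed assay values (natural feed 0.711%, product 4%, tails 0.25%); the figures are illustrative assumptions, not values given in the text.

```python
# Standard enrichment mass balance: F * xf = P * xp + T * xt, with F = P + T.
# Rearranged: feed per unit product = (xp - xt) / (xf - xt).
def feed_per_product(xp, xf=0.00711, xt=0.0025):
    """Kilograms of natural-uranium feed needed per kilogram of enriched product."""
    return (xp - xt) / (xf - xt)

if __name__ == "__main__":
    xp = 0.04  # assumed product assay: 4% U-235, in typical LWR fuel territory
    print(f"About {feed_per_product(xp):.1f} kg of natural uranium per kg of fuel")
```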
Conventional fuel resources
Uranium is a fairly common element in the Earth's crust. Uranium is approximately as common as tin or germanium in Earth's crust, and is about 35 times more common than silver. Uranium is a constituent of most rocks, dirt, and of the oceans. The fact that uranium is so spread out is a problem because mining uranium is only economically feasible where there is a large concentration. Still, the world's present measured resources of uranium, economically recoverable at a price of 130 USD/kg, are enough to last for "at least a century" at current consumption rates.[45][46] This represents a higher level of assured resources than is normal for most minerals. On the basis of analogies with other metallic minerals, a doubling of price from present levels could be expected to create about a tenfold increase in measured resources, over time. However, the cost of nuclear power lies for the most part in the construction of the power station. Therefore the fuel's contribution to the overall cost of the electricity produced is relatively small, so even a large fuel price escalation will have relatively little effect on final price. For instance, typically a doubling of the uranium market price would increase the fuel cost for a light water reactor by 26% and the electricity cost about 7%, whereas doubling the price of natural gas would typically add 70% to the price of electricity from that source. At high enough prices, eventually extraction from sources such as granite and seawater become economically feasible.[47][48]
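The price-sensitivity figures quoted above can be reproduced with simple cost-share arithmetic. In the sketch below, the share of uranium in the fuel cost and the share of fuel in the electricity cost are assumptions chosen to be consistent with the 26% and 7% figures in the text; the calculation only illustrates why a capital-dominated technology is relatively insensitive to fuel prices.

```python
# Illustrative cost-share arithmetic for fuel-price sensitivity.
# uranium_share_of_fuel and fuel_share_of_electricity are assumed fractions.
def electricity_cost_increase(fuel_price_multiplier,
                              uranium_share_of_fuel=0.26,
                              fuel_share_of_electricity=0.27):
    """Fractional increase in electricity cost when the uranium price is multiplied."""
    fuel_increase = (fuel_price_multiplier - 1) * uranium_share_of_fuel
    return fuel_increase * fuel_share_of_electricity

if __name__ == "__main__":
    rise = electricity_cost_increase(2.0)  # doubling of the uranium market price
    print(f"Electricity cost rises by about {rise:.0%}")  # roughly 7%
```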

Current light water reactors make relatively inefficient use of nuclear fuel, fissioning only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable and more efficient reactor designs allow better use of the available resources.[49]
Breeding
As opposed to current light water reactors which use uranium-235 (0.7% of all natural uranium), fast breeder reactors use uranium-238 (99.3% of all natural uranium). It has been estimated that there is up to five billion years’ worth of uranium-238 for use in these power plants.[50]

Breeder technology has been used in several reactors, but the high cost of reprocessing fuel safely requires uranium prices of more than 200 USD/kg before becoming justified economically.[51] As of December 2005, the only breeder reactor producing power is BN-600 in Beloyarsk, Russia. The electricity output of BN-600 is 600 MW — Russia has planned to build another unit, BN-800, at Beloyarsk nuclear power plant. Also, Japan's Monju reactor is planned for restart (having been shut down since 1995), and both China and India intend to build breeder reactors.

Another alternative would be to use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times as common as uranium in the Earth's crust, and has different geographic characteristics. This would extend the total practical fissionable resource base by 450%.[52] Unlike the breeding of U-238 into plutonium, fast breeder reactors are not necessary — it can be performed satisfactorily in more conventional plants. India has looked into this technology, as it has abundant thorium reserves but little uranium.
Fusion

Fusion power advocates commonly propose the use of deuterium or tritium, both isotopes of hydrogen, as fuel, and in many current designs also lithium and boron. Assuming a fusion energy output equal to the current global output, and that this does not increase in the future, the known current lithium reserves would last 3,000 years, lithium from sea water would last 60 million years, and a more complicated fusion process using only deuterium from sea water would have fuel for 150 billion years.[53] Although this process has yet to be realized, many experts and laypeople alike believe fusion to be a promising future energy source because of the short-lived radioactivity of the waste produced, its low carbon emissions, and its prospective power output.
Solid waste
For more details on this topic, see Radioactive waste.
See also: List of nuclear waste treatment technologies

The most important waste stream from nuclear power plants is spent nuclear fuel. It is primarily composed of unconverted uranium as well as significant quantities of transuranic actinides (plutonium and curium, mostly). In addition, about 3% of it is fission products from nuclear reactions. The actinides (uranium, plutonium, and curium) are responsible for the bulk of the long-term radioactivity, whereas the fission products are responsible for the bulk of the short-term radioactivity.[54]
High-level radioactive waste
After about 5 percent of a nuclear fuel rod has reacted inside a nuclear reactor that rod is no longer able to be used as fuel (due to the build-up of fission products). Today, scientists are experimenting on how to recycle these rods so as to reduce waste and use the remaining actinides as fuel (large-scale reprocessing is being used in a number of countries).

A typical 1000-MWe nuclear reactor produces approximately 20 cubic meters (about 27 tonnes) of spent nuclear fuel each year (but only 3 cubic meters of vitrified volume if reprocessed).[55][56] All the spent fuel produced to date by all commercial nuclear power plants in the US would cover a football field to the depth of about one meter.[57]

Spent nuclear fuel is initially very highly radioactive and so must be handled with great care and forethought. However, it becomes significantly less radioactive over the course of thousands of years. After 40 years, the radiation flux is 99.9% lower than it was at the moment the spent fuel was removed from the reactor, although it is still dangerously radioactive at that time.[49] After 10,000 years of radioactive decay, according to United States Environmental Protection Agency standards, the spent nuclear fuel will no longer pose a threat to public health and safety.
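The statement that spent fuel becomes far less radioactive over time follows from the usual exponential decay law obeyed by each isotope in it. The sketch below applies that law to a single hypothetical fission product with an assumed half-life; it illustrates the decay arithmetic only and does not model the full spent-fuel inventory behind the 99.9% figure quoted above.

```python
# Exponential decay of a single radioactive isotope: N(t) = N0 * 2**(-t / half_life).
def remaining_fraction(years, half_life_years):
    """Fraction of the original activity left after a given number of years."""
    return 2 ** (-years / half_life_years)

if __name__ == "__main__":
    half_life = 30.0  # assumed half-life (years) of an illustrative fission product
    for t in (5, 40, 300):
        print(f"after {t:>3} years: {remaining_fraction(t, half_life):.3f} of the activity remains")
```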

When first extracted, spent fuel rods are stored in shielded basins of water (spent fuel pools), usually located on-site. The water provides both cooling for the still-decaying fission products, and shielding from the continuing radioactivity. After a period of time (generally five years for US plants), the now cooler, less radioactive fuel is typically moved to a dry-storage facility or dry cask storage, where the fuel is stored in steel and concrete containers. Most U.S. waste is currently stored at the nuclear site where it is generated, while suitable permanent disposal methods are discussed.

As of 2007, the United States had accumulated more than 50,000 metric tons of spent nuclear fuel from nuclear reactors.[58] Permanent storage underground in the U.S. had been proposed at the Yucca Mountain nuclear waste repository, but that project has now been effectively cancelled; the permanent disposal of the U.S.'s high-level waste is an as-yet unresolved political problem.[59]

The amount of high-level waste can be reduced in several ways, particularly nuclear reprocessing. Even so, the remaining waste will be substantially radioactive for at least 300 years if the actinides are removed, and for up to thousands of years if the actinides are left in. Even with separation of all actinides, and the use of fast breeder reactors to destroy by transmutation some of the longer-lived non-actinides as well, the waste must be segregated from the environment for one to a few hundred years, so this is properly categorized as a long-term problem. Subcritical reactors or fusion reactors could also reduce the time the waste has to be stored.[60] It has been argued that the best solution for nuclear waste is above-ground temporary storage, since technology is changing rapidly. Some believe that current waste might become a valuable resource in the future.

According to a 2007 story broadcast on 60 Minutes, nuclear power gives France the cleanest air of any industrialized country, and the cheapest electricity in all of Europe.[61] France reprocesses its nuclear waste to reduce its mass and make more energy.[62] However, the article continues, "Today we stock containers of waste because currently scientists don't know how to reduce or eliminate the toxicity, but maybe in 100 years perhaps scientists will... Nuclear waste is an enormously difficult political problem which to date no country has solved. It is, in a sense, the Achilles heel of the nuclear industry... If France is unable to solve this issue, says Mandil, then 'I do not see how we can continue our nuclear program.'"[62] Further, reprocessing itself has its critics, such as the Union of Concerned Scientists.[63]
Low-level radioactive waste
The nuclear industry also produces a huge volume of low-level radioactive waste in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. In the United States, the Nuclear Regulatory Commission has repeatedly attempted to allow low-level materials to be handled as normal waste: landfilled, recycled into consumer items, et cetera.[citation needed] Most low-level waste releases very low levels of radioactivity and is only considered radioactive waste because of its history.[64]
Comparing radioactive waste to industrial toxic waste

In countries with nuclear power, radioactive wastes comprise less than 1% of total industrial toxic wastes, much of which remains hazardous indefinitely.[49] Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants are particularly noted for producing large amounts of toxic and mildly radioactive ash due to concentrating naturally occurring metals and mildly radioactive material from the coal. A recent report from Oak Ridge National Laboratory concludes that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent from radiation from coal plants is 100 times as much as from ideal operation of nuclear plants.[65] Indeed, coal ash is much less radioactive than nuclear waste, but ash is released directly into the environment, whereas nuclear plants use shielding to protect the environment from the irradiated reactor vessel, fuel rods, and any radioactive waste on site.[66]
Reprocessing
Reprocessing can potentially recover up to 95% of the remaining uranium and plutonium in spent nuclear fuel, putting it into new mixed oxide fuel. This reduces the long-term radioactivity of the remaining waste, since it is then largely short-lived fission products, and reduces its volume by over 90%. Reprocessing of civilian fuel from power reactors is currently done on a large scale in Britain, France and (formerly) Russia, will soon be done in China and perhaps India, and is being done on an expanding scale in Japan. The full potential of reprocessing has not been achieved because it requires breeder reactors, which are not yet commercially available. France is generally cited as the most successful reprocessor, but it presently recycles only 28% (by mass) of its yearly fuel use, 7% within France and another 21% in Russia.[67]

Unlike other countries, the US halted civilian reprocessing from 1976 to 1981 as part of its non-proliferation policy, since reprocessed material such as plutonium could be used in nuclear weapons; even after the ban was lifted, commercial reprocessing has not resumed in the U.S.[68] In the U.S., spent nuclear fuel is currently all treated as waste.[69]

In February 2006, a new U.S. initiative, the Global Nuclear Energy Partnership, was announced. It is an international effort aimed at reprocessing fuel in a manner that makes nuclear proliferation infeasible, while making nuclear power available to developing countries.[70]
Depleted uranium
Uranium enrichment produces many tons of depleted uranium (DU) which consists of U-238 with most of the easily fissile U-235 isotope removed. U-238 is a tough metal with several commercial uses—for example, aircraft production, radiation shielding, and armor—as it has a higher density than lead. Depleted uranium is also useful in munitions as DU penetrators (bullets or APFSDS tips) "self sharpen", due to uranium's tendency to fracture along shear bands.[71][72]

There are concerns that U-238 may lead to health problems in groups exposed to this material excessively, such as tank crews and civilians living in areas where large quantities of DU ammunition have been used in shielding, bombs, missile warheads, and bullets. In January 2003 the World Health Organization released a report finding that contamination from DU munitions was localized to within a few tens of meters of the impact sites and that contamination of local vegetation and water was "extremely low". The report also states that approximately 70% of ingested DU leaves the body within twenty-four hours and 90% within a few days.[73]
Economics
The economics of nuclear power plants are primarily influenced by the high initial investment necessary to construct a plant. In 2009, estimates for the cost of a new plant in the U.S. ranged from $6 to $10 billion. It is therefore usually more economical to run them as long as possible, or to construct additional reactor blocks in existing facilities. In 2008, new nuclear power plant construction costs were rising faster than the costs of other types of power plants.[74][75] A prestigious panel assembled for a 2003 MIT study of the industry found the following:

In deregulated markets, nuclear power is not now cost competitive with coal and natural gas. However, plausible reductions by industry in capital cost, operation and maintenance costs, and construction time could reduce the gap. Carbon emission credits, if enacted by government, can give nuclear power a cost advantage.
—The Future of Nuclear Power[76]

Comparative economics with other power sources are also discussed above and in the nuclear power debate.
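Because the capital cost dominates, the economics of a plant can be sketched with a simple levelized-cost calculation: annualize the overnight capital cost with a capital-recovery factor and spread it over the energy generated each year. All the input values below (capital cost, discount rate, lifetime, capacity factor, fuel and O&M costs) are assumptions chosen for illustration, not figures from the MIT study or the text.

```python
# Minimal levelized-cost-of-electricity sketch (all inputs are assumed values).
def lcoe(capital_per_kw, rate, years, capacity_factor, fuel_om_per_mwh):
    """Levelized cost in $/MWh for 1 kW of installed capacity."""
    crf = rate * (1 + rate) ** years / ((1 + rate) ** years - 1)  # capital recovery factor
    annual_capital = capital_per_kw * crf                         # $/kW-year
    annual_mwh = 8760 * capacity_factor / 1000                    # MWh per kW-year
    return annual_capital / annual_mwh + fuel_om_per_mwh

if __name__ == "__main__":
    cost = lcoe(capital_per_kw=5000, rate=0.08, years=40,
                capacity_factor=0.90, fuel_om_per_mwh=20.0)
    print(f"Illustrative LCOE: about ${cost:.0f}/MWh")
```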
Accidents and safety
Nine nuclear power plant accidents each caused more than US$300 million in property damage up to 2010 (costs in millions of 2006 US dollars):[77][78][79]

* February 22, 1977, Jaslovské Bohunice, Czechoslovakia: severe corrosion of the reactor and release of radioactivity into the plant area, necessitating total decommissioning (US$1,700)
* March 28, 1979, Middletown, Pennsylvania, US: loss of coolant and partial core meltdown; see Three Mile Island accident and Three Mile Island accident health effects (US$2,400)
* March 9, 1985, Athens, Alabama, US: instrumentation systems malfunction during startup, leading to suspension of operations at all three Browns Ferry units (US$1,830)
* April 11, 1986, Plymouth, Massachusetts, US: recurring equipment problems force emergency shutdown of Boston Edison's Pilgrim Nuclear Power Plant (US$1,001)
* April 26, 1986, Kiev, Ukraine: steam explosion and meltdown with 4,056 deaths (see Chernobyl disaster), necessitating the evacuation of 300,000 people from Kiev and dispersing radioactive material across Europe (see Chernobyl disaster effects) (US$6,700)
* March 31, 1987, Delta, Pennsylvania, US: Peach Bottom units 2 and 3 shut down due to cooling malfunctions and unexplained equipment problems (US$400)
* November 24, 1989, Greifswald, East Germany: electrical error causes fire in the main trough that destroys control lines and five main coolant pumps (US$443)
* September 2, 1996, Crystal River, Florida, US: balance-of-plant equipment malfunction forces shutdown and extensive repairs at Crystal River Unit 3 (US$384)
* February 1, 2010, Montpelier, Vermont, US: deteriorating underground pipes from the Vermont Yankee Nuclear Power Plant leak radioactive tritium into groundwater supplies (US$10)[80]
Environmental effects of nuclear power
Comparisons of life-cycle greenhouse gas emissions
Main article: Comparisons of life-cycle greenhouse gas emissions

Comparisons of life cycle analysis (LCA) of carbon dioxide emissions show nuclear power as comparable to renewable energy sources,[82][83] a conclusion that is disputed by other studies.[84]
Debate on nuclear power
The nuclear power debate is about the controversy[85][86][87] which has surrounded the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes. The debate about nuclear power peaked during the 1970s and 1980s, when it "reached an intensity unprecedented in the history of technology controversies", in some countries.[88][89]

Proponents of nuclear energy contend that nuclear power is a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on foreign oil.[90] Proponents claim that nuclear power produces virtually no conventional air pollution, such as greenhouse gases and smog, in contrast to the chief viable alternative of fossil fuel. Proponents also believe that nuclear power is the only viable course to achieve energy independence for most Western countries. Proponents claim that the risks of storing waste are small and can be further reduced by using the latest technology in newer reactors, and the operational safety record in the Western world is excellent when compared to the other major kinds of power plants.[91]

Opponents believe that nuclear power poses many threats to people and the environment[92][93][94]. These threats include the problems of processing, transport and storage of radioactive nuclear waste, the risk of nuclear weapons proliferation and terrorism, as well as health risks and environmental damage from uranium mining.[95][96] They also contend that reactors themselves are enormously complex machines where many things can and do go wrong, and there have been serious nuclear accidents.[78][97] Critics do not believe that the risks of using nuclear fission as a power source can be offset through the development of new technology. They also argue that when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is not a low-carbon electricity source.[98][99][100]

Arguments of economics and safety are used by both sides of the debate.
Nuclear power organizations
Against
* Friends of the Earth International, a network of environmental organizations in 77 countries.[101]
* Greenpeace International, a non-governmental environmental organization[102] with offices in 41 countries.[103]
* Nuclear Information and Resource Service (International)
* Sortir du nucléaire (Canada)
* Sortir du nucléaire (France)
* Pembina Institute (Canada)
* Institute for Energy and Environmental Research (United States)

Supportive
* World Nuclear Association, a confederation of companies connected with nuclear power production. (International)
* International Atomic Energy Agency (IAEA)
* Nuclear Energy Institute (United States)
* American Nuclear Society (United States)
* United Kingdom Atomic Energy Authority (United Kingdom)
* EURATOM (Europe)
* Atomic Energy of Canada Limited (Canada)

Future of the industry
As of 2007, Watts Bar 1, which came on-line on February 7, 1996, was the last U.S. commercial nuclear reactor to go on-line. This is often cited as evidence of a successful worldwide campaign for a nuclear power phase-out. However, even in the U.S. and throughout Europe, investment in research and in the nuclear fuel cycle has continued, and some nuclear industry experts[104] predict that electricity shortages, fossil fuel price increases, global warming and heavy metal emissions from fossil fuel use, new technology such as passively safe plants, and national energy security will renew the demand for nuclear power plants.

According to the World Nuclear Association, globally during the 1980s one new nuclear reactor started up every 17 days on average, and by the year 2015 this rate could increase to one every 5 days.[105]
Many countries remain active in developing nuclear power, including China, India, Japan and Pakistan, all actively developing both fast and thermal technology; South Korea and the United States, developing thermal technology only; and South Africa and China, developing versions of the Pebble Bed Modular Reactor (PBMR). Several EU member states actively pursue nuclear programs, while some other member states continue to ban the use of nuclear energy. Japan has an active nuclear construction program, with new units brought on-line in 2005. In the U.S., three consortia responded in 2004 to the U.S. Department of Energy's solicitation under the Nuclear Power 2010 Program and were awarded matching funds; the Energy Policy Act of 2005 authorized loan guarantees for up to six new reactors, and authorized the Department of Energy to build a reactor based on the Generation IV Very-High-Temperature Reactor concept to produce both electricity and hydrogen. As of the early 21st century, nuclear power is of particular interest to both China and India to serve their rapidly growing economies; both are developing fast breeder reactors. (See also energy development.) In the energy policy of the United Kingdom it is recognized that there is a likely future energy supply shortfall, which may have to be filled by either new nuclear plant construction or maintaining existing plants beyond their programmed lifetime.

There is a possible impediment to production of nuclear power plants as only a few companies worldwide have the capacity to forge single-piece reactor pressure vessels,[106] which are necessary in most reactor designs. Utilities across the world are submitting orders years in advance of any actual need for these vessels. Other manufacturers are examining various options, including making the component themselves, or finding ways to make a similar item using alternate methods.[107] Other solutions include using designs that do not require single-piece forged pressure vessels such as Canada's Advanced CANDU Reactors or Sodium-cooled Fast Reactors.

Some proponents of nuclear power as the key to this century's coming energy crisis, such as the NINET, propose redesigning nuclear power systems from the ground up. The nuclear power systems currently in use are adaptations of earlier designs, and as safety regulations and technologies have changed, their designs have become very complex and unwieldy. Designing new systems from scratch could make nuclear power cheaper to produce, safer, and more accessible.
China plans to build more than 100 plants,[108] while in the US the licenses of almost half its reactors have already been extended to 60 years,[109] and plans to build more than 30 new ones are under consideration. Further, the U.S. NRC and the U.S. Department of Energy have initiated research into light water reactor sustainability, which it is hoped will allow extensions of reactor licenses beyond 60 years, in increments of 20 years, provided that safety can be maintained, since the loss of non-CO2-emitting generation capacity from retiring reactors "may serve to challenge U.S. energy security, potentially resulting in increased greenhouse gas emissions, and contributing to an imbalance between electric supply and demand."[111] In 2008, the International Atomic Energy Agency (IAEA) predicted that nuclear power capacity could double by 2030, though that would not be enough to increase nuclear's share of electricity generation.

Laser

Laser is an acronym for Light Amplification by Stimulated Emission of Radiation. A laser produces a beam of photons whose waves share the same frequency and phase, so that constructive interference builds them into a high-energy pulse of light. A laser beam can be likened to a military battalion in which all the soldiers march in step, whereas an ordinary light source radiates scattered, irregular waves and so lacks the laser's concentrated power. Using suitable high-purity crystal materials (such as ruby), the emission of light of a single colour, that is of a single wavelength and a single phase, can be stimulated; as these waves are reflected back and forth many times between the two mirrors of the laser they reinforce one another (like soldiers falling into step in the battalion), and the beam that leaves the device carries a large amount of power.
How the laser works

The main parts of a laser are:

* (1) The gain medium that produces the laser beam.
* (2) The pumping energy (for example an electrical supply) that stimulates the medium to emit light waves.
* (3) A high-performance reflector (mirror).
* (4) The output optic, which may be flat or concave.
* (5) The output laser beam.

The device works by reflecting light of a single colour, that is of a single wavelength, between the mirror (3) and the output optic (4). This is done by stimulating the gain medium to emit light of a colour characteristic of that medium. After the light has been reflected through the medium many times, an equilibrium is reached between the radiation generated and accumulated in the medium and the coherent, multiply reflected beam that leaves the device.

The output optic has two important specifications:

* Radius of curvature:

The inner surface of the output optic may be flat or concave, depending on the desired purpose. The inner surface is given a partially reflecting (half-silvered) coating so that part of the laser beam passes from the cavity to the outside. If the external beam is to be collected and focused, the outer surface of the output optic is made concave. The outer surface is also given an anti-reflection coating so that the output beam leaves without losses.

* Reflection coefficient:

The number of times the light is reflected back and forth inside the cavity depends on the type of gain medium used. A helium-neon laser needs a mirror with a reflectivity of about 99% for the device to operate, whereas a nitrogen laser needs no internal feedback at all (a reflectivity of 0%), because its gain is very high. The reflective properties of the optics also depend on the wavelength of the light, so the optical properties of the output optic are given special attention when a laser is designed.
Types of laser

* Gas lasers (the CO2 laser, the excimer laser)
* Liquid (dye) lasers
* Semiconductor lasers
* Solid-state lasers (for example the neodymium-YAG laser)
Laser applications

Lasers are currently used in many fields, for example in CD players, in electronics manufacturing, in the precise measurement of distances (especially the distances of objects in space), and in communications. Lasers are also used to treat some eye diseases, by directing high-energy laser flashes at a particular point in the eye for a very short time (less than a second). Eye conditions treated with lasers include:

* Diabetic retinopathy.
* Holes (tears) in the retina.
* Retinal vein thrombosis or occlusion.
* Glaucoma (high eye pressure).
* Refractive errors of the eye (long-sightedness, short-sightedness and astigmatism).
* Blockage of the tear ducts.
* Some tumours inside the eye.
* Cosmetic surgery around the eye.
* Macular degeneration.

The laser is also used in surgical procedures such as brain surgery, heart and blood-vessel surgery, and general surgery. The laser was invented in 1960 as a device producing a beam of a single colour and direction that can be focused to a high degree by a convex lens. Many materials are capable of producing a laser beam, including solids (ruby and neodymium glass), gases (helium, neon and xenon), and semiconductor materials (such as gallium arsenide and indium antimonide).
Industry

When the laser medium is pumped with electrical energy, atoms are raised from a lower energy level to a higher one; as they fall back to the lower level, passing through an intermediate unstable level, they emit photons, which produces the laser action, and the beam that leaves the device can carry a very large power. Powers as high as 1,700 million megawatts have been reported, with interaction times of three to ten millionths of a second, pressures of about 1,050,000 kilograms per square inch, and temperatures between 100,000 and 200,000 degrees. Scientists hope to use this method to achieve nuclear fusion of light elements such as heavy hydrogen (deuterium), tritium and lithium for the production of electric power.

* Lasers of the types described above, but operating at lower powers and at temperatures between 1,000 and 1,800 degrees Celsius, are used in industry to cut steel plates up to about 3 cm thick. They have the advantage of cutting very precisely, since the laser is guided directly by computer.

* Lasers are used to weld solids and materials with high melting points, with great manufacturing precision, because they deliver an intense, narrowly focused beam. A laser can also drill a hole 5 micrometres in diameter in 200 microseconds in the hardest materials on the globe (diamond, ruby and titanium), and because the process is so brief, no change occurs in the nature of the material.

* Another important use is the accurate measurement of distances, whether short or long. A laser can measure ten metres with an error of less than one ten-thousandth of a metre. Laser beams have also been used to determine the distance of the Moon from the Earth: astronauts placed a mirror on the Moon to reflect laser light falling on it; a laser beam was then sent from the Earth to the Moon, reflected off that mirror, and returned to Earth, allowing scientists to calculate the Earth-Moon distance with great accuracy (see the round-trip timing sketch after this list).

* Lasers are also used for designating targets with very high accuracy: for a target at a distance of 20 km, a laser beam remains confined to a circle only 7 cm in diameter. If the beam were directed at the Moon, the illuminated circle would be only 3.2 km across.

* In the United States, massive research is under way on the use of very-high-energy lasers to destroy enemy missiles high in space before they reach America. Some success has been achieved along this path, but research is still ongoing: first to master this new technology, then to build a huge network to detect hostile missiles as they are launched, followed by directing powerful lasers (laser weapons) at the missiles to destroy them in space. This technology also involves the use of satellites. The United States has spent a great deal of money to make progress in this project.
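The lunar-ranging measurement mentioned in the list above reduces to timing the round trip of a light pulse: the one-way distance is half the round-trip time multiplied by the speed of light. The sketch below shows this arithmetic with an assumed round-trip time of about 2.56 seconds, roughly what an Earth-Moon round trip takes; the value is an assumption used only to illustrate the formula.

```python
# Distance from a timed laser round trip: d = c * t / 2.
C = 299_792_458  # speed of light in metres per second

def one_way_distance(round_trip_seconds):
    """One-way distance in kilometres for a measured round-trip time."""
    return C * round_trip_seconds / 2 / 1000

if __name__ == "__main__":
    t = 2.56  # assumed Earth-Moon round-trip time in seconds (illustrative)
    print(f"One-way distance: about {one_way_distance(t):,.0f} km")
```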
Laser
What is a laser?

Laser = Light Amplification by Stimulated Emission of Radiation

A laser is a device that emits electromagnetic radiation within the optical region of the spectrum through the process of stimulated emission. The emitted laser light is (usually) spatially coherent, a narrow, low-divergence beam that can be manipulated with lenses. In laser technology, "coherent light" refers to a source that emits light waves of identical frequency and phase.[1] The coherence of the laser beam sets it apart from light sources that emit incoherent light, whose phase varies randomly with time and position. Laser light is also usually narrow in wavelength, that is nearly monochromatic, although there are lasers that emit a broad spectrum of light or several different wavelengths at once. The word "laser" originated as an acronym for Light Amplification by Stimulated Emission of Radiation; "light" here broadly denotes electromagnetic radiation of any frequency, not only the visible spectrum, hence the terms infrared laser, ultraviolet laser, X-ray laser, and so on. Because the laser's microwave predecessor, the maser, came first, devices that emit microwave and radio frequencies are called "masers". In the early technical literature, especially among researchers at Bell Telephone Laboratories, the laser was also called an optical maser, a term that is no longer common. The word "laser" is also sometimes used loosely to describe non-laser-light technologies; for example, a coherent source of atoms is called an atom laser.

From left to right: gamma rays, X-ray, ultraviolet, visible spectrum, infrared waves, radio waves.
Components of the laser

The main components are:

* Gain medium
* Laser pumping energy
* High reflector
* Output coupler
* Laser beam

Laser construction

A laser consists of a gain medium inside a highly reflective optical cavity, together with a means of supplying energy to the medium. The gain medium is a material with properties that allow it to amplify light by stimulated emission. In its simplest form, the cavity consists of two mirrors arranged so that light bounces back and forth, each time passing through the gain medium. Typically one of the two mirrors, the output coupler, is partially transparent, and the laser beam is emitted through this mirror. Light of a specific wavelength that passes through the gain medium is amplified (increases in power), and the surrounding mirrors ensure that most of the light makes many passes through the gain medium, being amplified repeatedly. Part of the light that circulates between the mirrors (that is, within the cavity) passes through the partially transparent mirror and escapes as a beam of light. The process of supplying the energy required for this amplification is called pumping. The energy is usually supplied as an electric current or as light at a different wavelength; such pump light may be provided by a flash lamp or perhaps by another laser. Most practical lasers contain additional elements that affect properties such as the wavelength of the emitted light and the shape of the beam.
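The statement that laser action begins only when the gain overcomes the cavity losses can be expressed with the standard round-trip threshold condition: lasing starts when the round-trip amplification compensates for the mirror and internal losses. The sketch below evaluates that condition; the mirror reflectivities, gain coefficient, internal loss and cavity length are assumed illustrative numbers, not values from the text.

```python
import math

# Round-trip threshold condition for a two-mirror laser cavity:
# lasing becomes possible when R1 * R2 * exp(2 * (g - a) * L) >= 1,
# where g is the gain per unit length, a the internal loss per unit length,
# and L the cavity length (consistent units, here per metre and metres).
def above_threshold(r1, r2, gain, loss, length):
    """Return True if the round-trip gain compensates the cavity losses."""
    return r1 * r2 * math.exp(2 * (gain - loss) * length) >= 1.0

if __name__ == "__main__":
    # All numbers below are assumptions chosen only to demonstrate the condition.
    r1, r2 = 0.99, 0.95        # mirror reflectivities (high reflector, output coupler)
    loss, length = 0.01, 0.20  # internal loss per metre, cavity length in metres
    for gain in (0.1, 0.5):    # gain per metre of the medium
        ok = above_threshold(r1, r2, gain, loss, length)
        print(f"gain {gain} per metre: lasing possible -> {ok}")
```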
Laser Physics

A helium-neon laser demonstration in a laboratory at the University of Paris: the glowing ray in the middle is an electric discharge producing light in much the same way as a neon lamp. It is the gain medium through which the laser passes, not the laser beam itself, that is visible there. The laser beam crosses the air and appears as a red dot on the screen to the right.

The spectrum of a helium-neon laser shows the very high spectral purity that is fundamental to nearly all lasers; compare this with the relatively broad spectral emission of a light-emitting diode. The gain medium of a laser is a material of controlled purity, size, concentration and shape, which amplifies the beam by stimulated emission. It can be of any state: gas, liquid, solid or plasma. The gain medium absorbs pump energy, which raises some electrons into higher-energy ("excited") states. Particles can interact with light either by absorbing photons or by emitting photons. Emission can be spontaneous or stimulated; in the latter case the photon is emitted in the same direction as the light passing by. When the number of particles in one excited state exceeds the number of particles in some lower-energy state, population inversion is achieved and the amount of stimulated emission due to light passing through is greater than the amount of absorption; the light is therefore amplified. By itself, this makes an optical amplifier. When an optical amplifier is placed inside a resonant optical cavity, one obtains a laser. The light generated by stimulated emission is very similar to the input signal in wavelength, phase and polarization. This gives laser light its characteristic coherence and allows it to maintain the uniform polarization and monochromaticity established, in many cases, by the design of the optical cavity.

The optical cavity, a type of cavity resonator, contains a coherent beam of light between reflective surfaces so that the light passes through the gain medium more than once before it is emitted from the output aperture or lost to diffraction or absorption. As the light circulates through the cavity and passes through the gain medium, if the gain (amplification) in the medium is stronger than the resonator losses, the power of the circulating light can rise exponentially. But each stimulated-emission event returns a particle from its excited state to the ground state, reducing the capacity of the gain medium for further amplification. When this effect becomes strong, the gain is said to be saturated. The balance of pump power against gain saturation and cavity losses produces an equilibrium value of the laser power inside the cavity, and this equilibrium determines the operating point of the laser. If the chosen pump power is too small, the gain is not sufficient to overcome the resonator losses and the laser emits only a very small amount of light. The minimum pump power needed to begin laser action is called the lasing threshold. The gain medium will amplify any photons passing through it, regardless of direction, but only photons aligned with the cavity manage to pass more than once through the medium and so receive substantial amplification. The beam in the cavity, and the output beam of the laser, when they propagate in free space rather than in a waveguide (as in an optical-fibre laser), are, at best, low-order Gaussian beams. If the beam is not a low-order Gaussian beam, its transverse modes can be described as a superposition of Hermite-Gaussian or Laguerre-Gaussian beams (for a stable laser cavity). Unstable laser resonators, on the other hand, have been shown to produce fractal-shaped beams.

The beam may be highly collimated, that is, travelling in parallel without diverging. However, a perfectly collimated beam cannot be created, because of diffraction. The beam remains collimated over a distance which varies with the square of the beam radius, and eventually diverges at an angle which varies inversely with the beam diameter. Thus, a beam generated by a small laboratory laser such as a helium-neon laser spreads to a diameter of about 1.6 kilometres (1 mile) if shone from the Earth to the Moon. By comparison, the output of a typical semiconductor laser, because of its small beam diameter, diverges almost as soon as it leaves the facet, at an angle of up to 50 degrees; such a beam can, however, be transformed into a collimated beam by means of a lens. In contrast, light from sources other than lasers cannot be collimated as effectively by optics. Although the laser phenomenon was discovered with the help of quantum physics, it is not essentially more quantum-mechanical than other light sources; the operation of a free-electron laser, for example, can be explained without reference to quantum mechanics. The output of a laser may be continuous, with constant power (known as continuous-wave operation), or pulsed, using techniques such as Q-switching or gain-switching. In pulsed operation, much higher peak powers can be achieved. Some types of lasers, such as dye lasers and certain solid-state lasers, can produce light over a broad range of wavelengths; this property makes them highly suitable for generating extremely short pulses of light, on the order of femtoseconds (10^-15 s).
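The divergence behaviour described above can be quantified with the standard Gaussian-beam relation, in which the far-field half-angle is the wavelength divided by pi times the beam-waist radius. In the sketch below the wavelength and waist are assumed illustrative values for a helium-neon laser, and the propagation distance is arbitrary; the result illustrates the formula, not the specific figures quoted in the text.

```python
import math

# Far-field divergence of a Gaussian beam: theta = wavelength / (pi * waist_radius).
def spot_diameter(wavelength_m, waist_radius_m, distance_m):
    """Approximate beam diameter after propagating a given distance."""
    theta = wavelength_m / (math.pi * waist_radius_m)  # half-angle divergence (radians)
    return 2 * (waist_radius_m + theta * distance_m)   # diameter at the target

if __name__ == "__main__":
    wavelength = 633e-9  # assumed HeNe wavelength (metres)
    waist = 0.5e-3       # assumed beam-waist radius of 0.5 mm
    distance = 1000.0    # propagation distance of 1 km (illustrative)
    print(f"Spot diameter after 1 km: about {spot_diameter(wavelength, waist, distance):.2f} m")
```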
Types and principles of operation of the laser

Lasers are commercially available at many wavelengths. The sections below list the main types of laser, the wavelengths at which they emit, the technique used, and the colour and type of the laser material.
Gas lasers

Many gases are used to produce laser beams for a wide range of purposes. The helium-neon (HeNe) laser can emit at a variety of wavelengths, most commonly 633 nm, and is widespread in education because of its low cost.
Carbon dioxide laser

Carbon dioxide lasers can emit several hundred kilowatts at 9.6 micrometres and 10.6 micrometres, and are often used in industry for cutting and welding. The efficiency of a carbon dioxide laser is above 10%.
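The wavelengths quoted for the different laser types determine the energy carried by each photon through the relation E = hc/λ. The sketch below evaluates this for a few of the wavelengths mentioned in the surrounding sections; the constants are standard physical values, and the conversion to electron-volts is included only for readability.

```python
# Photon energy from wavelength: E = h * c / wavelength.
H = 6.62607015e-34    # Planck constant, J*s
C = 299_792_458       # speed of light, m/s
EV = 1.602176634e-19  # joules per electron-volt

def photon_energy_ev(wavelength_m):
    """Photon energy in electron-volts for a given wavelength in metres."""
    return H * C / wavelength_m / EV

if __name__ == "__main__":
    for name, wl in [("CO2 laser, 10.6 um", 10.6e-6),
                     ("HeNe laser, 633 nm", 633e-9),
                     ("Nitrogen laser, 337.1 nm", 337.1e-9)]:
        print(f"{name}: {photon_energy_ev(wl):.2f} eV")
```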
Argon-ion laser

Argon-ion lasers emit light at wavelengths from 351 nm to 528.7 nm. Depending on the optics and the laser tube, a different number of lines is usable, but the most commonly used lines are 458 nm, 488 nm and 514.5 nm. The nitrogen transverse electrical-discharge laser, operating in gas at atmospheric pressure, is a cheap gas laser producing ultraviolet light at a wavelength of 337.1 nm.

Metal-ion lasers are gas lasers that generate deep-ultraviolet wavelengths. Helium-silver (HeAg) at 224 nm and neon-copper (NeCu) at 248 nm are two examples. These lasers have particularly narrow oscillation linewidths, less than 3 GHz, which makes them candidates for specialized applications.
Chemical laser

A chemical laser is powered by a chemical reaction and can achieve high powers in continuous operation. For example, in the hydrogen fluoride laser (2700-2900 nm) and the deuterium fluoride laser (3800 nm), the reaction is the combination of hydrogen or deuterium gas with the combustion products of ethylene in nitrogen trifluoride. They were invented by George C. Pimentel.
Solid-state laser

Solid-state laser materials are commonly made by "doping" a crystalline solid host with ions that provide the required energy states. For example, the first working laser was the ruby laser, made from ruby (chromium-doped sapphire, that is, chromium in aluminium oxide). Chromium and neodymium are common dopants. Fibre lasers also belong to the category of solid-state lasers; they are an effective and practical means of generating laser light and are used in industry for cutting materials and parts and for welding metals.
Semiconductor lasers

Semiconductor lasers are solid-state devices, but by convention the term "solid-state laser" excludes them.

Neodymium is a common dopant in various solid-state laser crystals, including yttrium orthovanadate (Nd:YVO4), yttrium lithium fluoride (Nd:YLF) and yttrium aluminium garnet (Nd:YAG). All of these can produce high powers in the infrared spectrum at a wavelength of 1064 nm. They are used for cutting, welding and marking of metals and other materials, as well as in spectroscopy and for pumping dye lasers.

These lasers are also commonly frequency-doubled, tripled or quadrupled to produce light at 532 nm (green, visible), 355 nm and 266 nm (ultraviolet) when light of those wavelengths is required. Ytterbium, holmium, thulium and erbium are other common dopants in solid-state lasers. Ytterbium is used in crystals such as Yb:YAG, Yb:KGW, Yb:KYW, Yb:SYS, Yb:BOYS and Yb:CaF2, typically operating around 1020-1050 nm; these lasers are potentially very efficient and can be run at high power because of their small quantum defect, and very short pulses of very high intensity can be achieved with Yb:YAG. Holmium-doped YAG crystals emit at 2097 nm and form an efficient laser operating at infrared wavelengths that are strongly absorbed by water-bearing tissues. The Ho:YAG laser is usually operated in a pulsed mode and passed through optical-fibre surgical devices to resurface joints, remove decay from teeth, vaporize cancers, and pulverize kidney and gall bladder stones.
Infrared lasers

Titanium-sapphire (Ti:sapphire) lasers produce tunable infrared light and are typically used for spectroscopy and for generating very short pulses. Thermal limitations in solid-state lasers arise from unconverted pump power, which manifests itself as heat. This heat, combined with a high thermo-optic coefficient (dn/dT), can lead to thermal lensing as well as reduced quantum efficiency. These problems are overcome in another type of diode-pumped solid-state laser, the diode-pumped thin disk laser. The thermal limitations of this laser type are alleviated by using a laser-medium geometry whose thickness is much smaller than the diameter of the pump beam. This allows for a more uniform thermal gradient in the material, and thin disk lasers have been shown to produce output powers at kilowatt levels.
Uses of the laser

Lasers range in size from microscopic diode lasers (top), with numerous applications, to neodymium glass lasers the size of a football field (bottom), used for inertial confinement fusion, nuclear weapons research and other high-energy-density physics experiments.
Laser applications

When the laser was invented in 1960, it was called "a solution looking for a problem".[23] Since then, lasers have become ubiquitous, finding use in thousands of highly varied applications in every section of modern society, including consumer electronics, information technology, science, medicine, industry, law enforcement, entertainment and the military. The first application of lasers visible in the daily life of the general population was the supermarket barcode scanner, introduced in 1974. The laserdisc player, introduced in 1978, was the first successful consumer product to include a laser, but the compact-disc player was the first laser-equipped device to become truly common in consumers' homes, beginning in 1982, followed shortly by laser printers.
Some other applications

* Medicine: bloodless surgery, laser healing and surgical treatment, kidney-stone treatment, ophthalmology, dentistry.
* Industry: cutting, welding, heat treatment of materials, marking of parts.
* Defence: target designation, guiding munitions, missile defence, electro-optical countermeasures against radar, blinding enemy troops.
* Research: spectroscopy, laser ablation, laser scattering, laser interferometry, laser ranging.
* Commercial products: laser printers, CD-ROMs, barcode scanners, thermometers, laser pointers, holograms.
Lasers and weapons

Lasers are well known as weapon systems in science-fiction films, but actual laser weapons are only beginning to enter the market. In outline, a laser beam weapon strikes its target with a train of short pulses of light; the rapid evaporation and expansion of the target surface produces a shock that damages the target. The power required to project a high-energy laser beam of this kind is beyond current mobile power-supply technology, so working prototypes generally rely on chemically powered gas lasers. Lasers of all but the lowest powers can also be used as incapacitating weapons, through their ability to cause temporary or permanent loss of vision in varying degrees when aimed at the eyes.