
Friday 23 September 2011

Nuclear physics

Nuclear physics is the field of physics that studies the building blocks and interactions of atomic nuclei. The most commonly known applications of nuclear physics are nuclear power and nuclear weapons, but the research has provided wider applications, including those in medicine (nuclear medicine, magnetic resonance imaging), materials engineering (ion implantation) and archaeology (radiocarbon dating).

The field of particle physics evolved out of nuclear physics and, for this reason, was included under the same term in earlier times.


History

The history of nuclear physics as a discipline distinct from atomic physics starts with the discovery of radioactivity by Henri Becquerel in 1896, made while investigating phosphorescence in uranium salts. The discovery of the electron by J. J. Thomson a year later was an indication that the atom had internal structure. At the turn of the 20th century the accepted model of the atom was J. J. Thomson's "plum pudding" model in which the atom was a large positively charged ball with small negatively charged electrons embedded inside of it. By the turn of the century physicists had also discovered three types of radiation coming from atoms, which they named alpha, beta, and gamma radiation. Experiments in 1911 by Lise Meitner and Otto Hahn, and by James Chadwick in 1914, discovered that the beta decay spectrum was continuous rather than discrete. That is, electrons were ejected from the atom with a range of energies, rather than the discrete amounts of energy observed in gamma and alpha decays. This was a problem for nuclear physics at the time, because it indicated that energy was not conserved in these decays.

In 1905, Albert Einstein formulated the idea of mass–energy equivalence. While the work on radioactivity by Becquerel, Pierre and Marie Curie predates this, an explanation of the source of the energy of radioactivity would have to wait for the discovery that the nucleus itself was composed of smaller constituents, the nucleons.
Rutherford's team discovers the nucleus

In 1907 Ernest Rutherford published "Radiation of the α Particle from Radium in passing through Matter".[3] Geiger expanded on this work in a communication to the Royal Society[4] with experiments he and Rutherford had done passing α particles through air, aluminum foil and gold leaf. More work was published in 1909 by Geiger and Marsden, and further greatly expanded work was published in 1910 by Geiger.[6] In 1911–12 Rutherford went before the Royal Society to explain the experiments and propound the new theory of the atomic nucleus as we now understand it.

The key experiment behind this announcement happened in 1910 when Ernest Rutherford's team performed a remarkable experiment in which Hans Geiger and Ernest Marsden, under his supervision, fired alpha particles (helium nuclei) at a thin film of gold foil. The plum pudding model predicted that the alpha particles should come out of the foil with their trajectories at most slightly bent. Rutherford had instructed his team to look for something that shocked him when it was actually observed: a few particles were scattered through large angles, in some cases even completely backwards. He likened it to firing a bullet at tissue paper and having it bounce off. The discovery, beginning with Rutherford's analysis of the data in 1911, eventually led to the Rutherford model of the atom, in which the atom has a very small, very dense nucleus containing most of its mass and consisting of heavy positively charged particles with embedded electrons in order to balance out the charge (since the neutron was unknown). As an example, in this model (which is not the modern one) nitrogen-14 consisted of a nucleus with 14 protons and 7 electrons (21 total particles), and the nucleus was surrounded by 7 more orbiting electrons.

The Rutherford model worked quite well until studies of nuclear spin were carried out by Franco Rasetti at the California Institute of Technology in 1929. By 1925 it was known that protons and electrons had a spin of 1/2, and in the Rutherford model of nitrogen-14, 20 of the total 21 nuclear particles should have paired up to cancel each other's spin, and the final odd particle should have left the nucleus with a net spin of 1/2. Rasetti discovered, however, that nitrogen-14 has a spin of 1.
James Chadwick discovers the neutron

In 1932 Chadwick realized that radiation that had been observed by Walther Bothe, Herbert L. Becker, Irène and Frédéric Joliot-Curie was actually due to a neutral particle of about the same mass as the proton, that he called the neutron (following a suggestion about the need for such a particle, by Rutherford). In the same year Dmitri Ivanenko suggested that neutrons were in fact spin 1/2 particles and that the nucleus contained neutrons to explain the mass not due to protons, and that there were no electrons in the nucleus—only protons and neutrons. The neutron spin immediately solved the problem of the spin of nitrogen-14, as the one unpaired proton and one unpaired neutron in this model, each contribute a spin of 1/2 in the same direction, for a final total spin of 1.

With the discovery of the neutron, scientists could at last calculate what fraction of binding energy each nucleus had, by comparing the nuclear mass with the masses of the protons and neutrons which composed it. Differences between nuclear masses were calculated in this way and—when nuclear reactions were measured—were found to agree with Einstein's calculation of the equivalence of mass and energy to high accuracy (to within 1% as of 1934).
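As a rough illustration of this mass-to-energy bookkeeping, the sketch below computes the binding energy of helium-4 from its mass defect. It is a minimal sketch; the mass values and the helper function are assumed illustrative figures, not numbers from the text above.

    # Hedged sketch: nuclear binding energy from the mass defect, i.e. the mass of the
    # separated nucleons minus the mass of the bound nucleus, converted via E = mc^2.
    M_PROTON = 1.007276    # u (assumed approximate value)
    M_NEUTRON = 1.008665   # u
    M_ELECTRON = 0.000549  # u
    U_TO_MEV = 931.494     # energy equivalent of 1 u in MeV

    def binding_energy_mev(Z, N, atomic_mass_u):
        """Binding energy of a nuclide with Z protons and N neutrons.

        atomic_mass_u is the measured atomic (not bare nuclear) mass, so the
        electron masses are grouped with the protons before subtracting.
        """
        mass_parts = Z * (M_PROTON + M_ELECTRON) + N * M_NEUTRON
        mass_defect = mass_parts - atomic_mass_u
        return mass_defect * U_TO_MEV

    # Helium-4 (atomic mass about 4.002602 u): roughly 28.3 MeV of total binding energy
    print(binding_energy_mev(2, 2, 4.002602))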
Yukawa's meson postulated to bind nuclei

In 1935 Hideki Yukawa proposed the first significant theory of the strong force to explain how the nucleus holds together. In the Yukawa interaction a virtual particle, later called a meson, mediated a force between all nucleons, including protons and neutrons. This force explained why nuclei did not disintegrate under the influence of proton repulsion, and it also gave an explanation of why the attractive strong force had a more limited range than the electromagnetic repulsion between protons. Later, the discovery of the pi meson showed it to have the properties of Yukawa's particle.

With Yukawa's papers, the modern model of the atom was complete. The center of the atom contains a tight ball of neutrons and protons, which is held together by the strong nuclear force, unless it is too large. Unstable nuclei may undergo alpha decay, in which they emit an energetic helium nucleus, or beta decay, in which they eject an electron (or positron). After one of these decays the resultant nucleus may be left in an excited state, and in this case it decays to its ground state by emitting high energy photons (gamma decay).

The study of the strong and weak nuclear forces (the latter explained by Enrico Fermi via Fermi's interaction in 1934) led physicists to collide nuclei and electrons at ever higher energies. This research became the science of particle physics, the crown jewel of which is the standard model of particle physics which unifies the strong, weak, and electromagnetic forces.
Modern nuclear physics
Main articles: Liquid-drop model and Nuclear shell model

A heavy nucleus can contain hundreds of nucleons, which means that to some approximation it can be treated as a classical system rather than a quantum-mechanical one. In the resulting liquid-drop model, the nucleus has an energy which arises partly from surface tension and partly from electrical repulsion of the protons. The liquid-drop model is able to reproduce many features of nuclei, including the general trend of binding energy with respect to mass number, as well as the phenomenon of nuclear fission.
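A minimal sketch of the semi-empirical (Bethe-Weizsaecker) mass formula that the liquid-drop picture motivates is given below. The coefficient values are typical textbook fits and are assumptions, not figures from this article.

    # Hedged sketch of the semi-empirical mass formula; coefficients in MeV are assumed.
    def semf_binding_energy(Z, A):
        """Approximate total binding energy (MeV) of a nucleus with Z protons and A nucleons."""
        a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
        N = A - Z
        volume  = a_v * A                            # bulk (volume) term
        surface = -a_s * A ** (2 / 3)                # surface-tension correction
        coulomb = -a_c * Z * (Z - 1) / A ** (1 / 3)  # electrical repulsion of the protons
        asym    = -a_a * (N - Z) ** 2 / A            # neutron-proton asymmetry
        if Z % 2 == 0 and N % 2 == 0:                # pairing: even-even nuclei gain a little
            pairing = a_p / A ** 0.5
        elif Z % 2 == 1 and N % 2 == 1:              # odd-odd nuclei lose a little
            pairing = -a_p / A ** 0.5
        else:
            pairing = 0.0
        return volume + surface + coulomb + asym + pairing

    # Binding energy per nucleon peaks in the iron/nickel region, as the text notes.
    for Z, A in [(26, 56), (28, 62), (92, 238)]:
        print(Z, A, round(semf_binding_energy(Z, A) / A, 2), "MeV per nucleon")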

Superimposed on this classical picture, however, are quantum-mechanical effects, which can be described using the nuclear shell model, developed in large part by Maria Goeppert-Mayer. Nuclei with certain numbers of neutrons and protons (the magic numbers 2, 8, 20, 50, 82, 126, ...) are particularly stable, because their shells are filled.

Other more complicated models for the nucleus have also been proposed, such as the interacting boson model, in which pairs of neutrons and protons interact as bosons, analogously to Cooper pairs of electrons.

Much of current research in nuclear physics relates to the study of nuclei under extreme conditions such as high spin and excitation energy. Nuclei may also have extreme shapes (similar to that of a rugby ball) or extreme neutron-to-proton ratios. Experimenters can create such nuclei using artificially induced fusion or nucleon transfer reactions, employing ion beams from an accelerator. Beams with even higher energies can be used to create nuclei at very high temperatures, and there are signs that these experiments have produced a phase transition from normal nuclear matter to a new state, the quark-gluon plasma, in which the quarks mingle with one another, rather than being segregated in triplets as they are in neutrons and protons.
Modern topics in nuclear physics
Spontaneous changes from one nuclide to another: nuclear decay

There are 80 elements which have at least one stable isotope (defined as isotopes never observed to decay), and in total there are about 256 such stable isotopes. However, there are thousands more well-characterized isotopes which are unstable. These radioisotopes decay on timescales ranging from fractions of a second to weeks, years, or many billions of years.
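All of these timescales are governed by the same exponential decay law. The short sketch below illustrates it with an assumed half-life (carbon-14, about 5730 years); the numbers are illustrative, not taken from the text.

    # Hedged sketch: exponential radioactive decay, N(t) = N0 * exp(-lambda * t),
    # with lambda = ln(2) / half-life. The half-life used is an assumed example.
    import math

    def remaining_fraction(t, half_life):
        """Fraction of the original nuclei still undecayed after time t."""
        decay_constant = math.log(2) / half_life
        return math.exp(-decay_constant * t)

    print(remaining_fraction(5730, 5730))    # one half-life  -> 0.5
    print(remaining_fraction(11460, 5730))   # two half-lives -> 0.25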

If a nucleus has too few or too many neutrons it may be unstable, and will decay after some period of time. For example, in a process called beta decay a nitrogen-16 atom (7 protons, 9 neutrons) is converted to an oxygen-16 atom (8 protons, 8 neutrons) within a few seconds of being created. In this decay a neutron in the nitrogen nucleus is turned into a proton, an electron and an antineutrino by the weak nuclear force. The element is transmuted to another element in the process, because while it previously had seven protons (which makes it nitrogen) it now has eight (which makes it oxygen).

In alpha decay the radioactive element decays by emitting a helium nucleus (2 protons and 2 neutrons), giving another element, plus helium-4. In many cases this process continues through several steps of this kind, including other types of decays, until a stable element is formed.

In gamma decay, a nucleus decays from an excited state into a lower-energy state, often the ground state, by emitting a gamma ray. The element is not changed in the process.

Other more exotic decays are possible (see the main article). For example, in internal conversion decay, the energy from an excited nucleus may be used to eject one of the inner orbital electrons from the atom, in a process which produces high speed electrons, but is not beta decay, and (unlike beta decay) does not transmute one element to another.
Nuclear fusion
When two low mass nuclei come into very close contact with each other it is possible for the strong force to fuse the two together. It takes a great deal of energy to push the nuclei close enough together for the strong nuclear force to have an effect, so the process of nuclear fusion can only take place at very high temperatures or high pressures. Once the nuclei are close enough together the strong force overcomes their electromagnetic repulsion and squeezes them into a new nucleus. A very large amount of energy is released when light nuclei fuse together because the binding energy per nucleon increases with mass number up until nickel-62. Stars like our sun are powered by the fusion of four protons into a helium nucleus, two positrons, and two neutrinos. The uncontrolled fusion of hydrogen into helium is known as thermonuclear runaway. Research to find an economically viable method of using energy from a controlled fusion reaction is currently being undertaken by various research establishments (see JET and ITER).
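A rough estimate of the energy released by the net solar reaction mentioned above (four protons fusing into a helium nucleus, two positrons and two neutrinos) can be made from the mass defect. The particle masses below are approximate, assumed values, not figures from the text.

    # Hedged sketch: Q-value of 4 p -> He-4 + 2 e+ + 2 neutrinos from the mass defect.
    M_PROTON   = 1.007276   # u (assumed approximate value)
    M_HE4      = 4.001506   # bare helium-4 nucleus, u
    M_POSITRON = 0.000549   # u
    U_TO_MEV   = 931.494

    mass_before = 4 * M_PROTON
    mass_after  = M_HE4 + 2 * M_POSITRON        # neutrino rest masses are negligible
    q_value = (mass_before - mass_after) * U_TO_MEV
    print(round(q_value, 1), "MeV released per helium-4 nucleus formed")  # about 24.7 MeV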
Nuclear fission
Main article: Nuclear fission

For nuclei heavier than nickel-62 the binding energy per nucleon decreases with the mass number. It is therefore possible for energy to be released if a heavy nucleus breaks apart into two lighter ones. This splitting of atoms is known as nuclear fission.
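The same binding-energy argument gives a quick back-of-envelope estimate of the energy released per fission. The per-nucleon binding energies below are typical textbook values and are assumptions, not taken from this article.

    # Hedged estimate: a heavy nucleus bound at ~7.6 MeV per nucleon splitting into
    # fragments bound at ~8.5 MeV per nucleon releases roughly the difference.
    A = 236                 # nucleons in the fissioning compound nucleus (assumed example)
    be_heavy = 7.6          # MeV per nucleon in the heavy nucleus (assumed)
    be_fragments = 8.5      # MeV per nucleon in mid-mass fragments (assumed)

    energy_released = A * (be_fragments - be_heavy)
    print(round(energy_released), "MeV per fission")   # on the order of 200 MeV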

The process of alpha decay may be thought of as a special type of spontaneous nuclear fission. This process produces a highly asymmetrical fission because the four particles which make up the alpha particle are especially tightly bound to each other, making production of this nucleus in fission particularly likely.

For certain of the heaviest nuclei which produce neutrons on fission, and which also easily absorb neutrons to initiate fission, a self-igniting type of neutron-initiated fission can be obtained, in a so-called chain reaction. (Chain reactions were known in chemistry before physics, and in fact many familiar processes like fires and chemical explosions are chemical chain reactions.) The fission or "nuclear" chain-reaction, using fission-produced neutrons, is the source of energy for nuclear power plants and fission type nuclear bombs such as the two that the United States used against Hiroshima and Nagasaki at the end of World War II. Heavy nuclei such as uranium and thorium may undergo spontaneous fission, but they are much more likely to undergo decay by alpha decay.
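The essence of a chain reaction is geometric growth in the number of fissions from one neutron generation to the next. The toy sketch below is an assumed illustration of that multiplication, not a model of any real reactor or weapon.

    # Hedged sketch: neutron multiplication with effective factor k per generation.
    def fissions_per_generation(k, generations, start=1.0):
        """Number of fissions in each generation for multiplication factor k."""
        counts = [start]
        for _ in range(generations):
            counts.append(counts[-1] * k)
        return counts

    print(fissions_per_generation(2.0, 10))   # supercritical (k > 1): grows rapidly
    print(fissions_per_generation(1.0, 10))   # critical (k = 1): steady
    print(fissions_per_generation(0.5, 10))   # subcritical (k < 1): dies out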

For a neutron-initiated chain-reaction to occur, there must be a critical mass of the element present in a certain space under certain conditions (these conditions slow and conserve neutrons for the reactions). There is one known example of a natural nuclear fission reactor, which was active in two regions of Oklo, Gabon, Africa, over 1.5 billion years ago. Measurements of natural neutrino emission have demonstrated that around half of the heat emanating from the Earth's core results from radioactive decay. However, it is not known if any of this results from fission chain-reactions.
Production of heavy elements

According to the theory, as the Universe cooled after the big bang it eventually became possible for particles as we know them to exist. The most common particles created in the big bang which are still easily observable to us today were protons (hydrogen) and electrons (in equal numbers). Some heavier elements were created as the protons collided with each other, but most of the heavy elements we see today were created inside of stars during a series of fusion stages, such as the proton-proton chain, the CNO cycle and the triple-alpha process. Progressively heavier elements are created during the evolution of a star.

Since the binding energy per nucleon peaks around iron, energy is only released in fusion processes occurring below this point. Since the creation of heavier nuclei by fusion costs energy, nature resorts to the process of neutron capture. Neutrons (due to their lack of charge) are readily absorbed by a nucleus. The heavy elements are created by either a slow neutron capture process (the so-called s process) or by the rapid, or r, process.

The s process occurs in thermally pulsing stars (called AGB, or asymptotic giant branch stars) and takes hundreds to thousands of years to reach the heaviest elements of lead and bismuth. The r process is thought to occur in supernova explosions because the conditions of high temperature, high neutron flux and ejected matter are present. These stellar conditions make the successive neutron captures very fast, involving very neutron-rich species which then beta-decay to heavier elements, especially at the so-called waiting points that correspond to more stable nuclides with closed neutron shells (magic numbers). The r process duration is typically in the range of a few seconds.

Mechanics

Mechanics (Greek Μηχανική) is the branch of physics concerned with the behavior of physical bodies when subjected to forces or displacements, and the subsequent effects of the bodies on their environment. The discipline has its roots in several ancient civilizations (see History of classical mechanics and Timeline of classical mechanics). During the early modern period, scientists such as Galileo, Kepler, and especially Newton laid the foundation for what is now known as classical mechanics.


The main divisions and branches of mechanics are outlined below.

Classical versus quantum


The major division of the mechanics discipline separates classical mechanics from quantum mechanics.

Historically, classical mechanics came first, while quantum mechanics is a comparatively recent invention. Classical mechanics originated with Isaac Newton's laws of motion in Philosophiæ Naturalis Principia Mathematica, while quantum mechanics did not appear until 1900. Both are commonly held to constitute the most certain knowledge that exists about physical nature. Classical mechanics has especially often been viewed as a model for other so-called exact sciences. Essential in this respect is the relentless use of mathematics in theories, as well as the decisive role played by experiment in generating and testing them.

Quantum mechanics is of a wider scope, as it encompasses classical mechanics as a sub-discipline which applies under certain restricted circumstances. According to the correspondence principle, there is no contradiction or conflict between the two subjects; each simply pertains to specific situations. The correspondence principle states that the behavior of systems described by quantum theories reproduces classical physics in the limit of large quantum numbers. Quantum mechanics has superseded classical mechanics at the foundational level and is indispensable for the explanation and prediction of processes at the molecular and (sub)atomic level. However, for macroscopic processes classical mechanics is able to solve problems which are unmanageably difficult in quantum mechanics and hence remains useful and well used.


Einsteinian versus Newtonian

Analogous to the quantum-versus-classical division, Einstein's general and special theories of relativity have expanded the scope of mechanics beyond that of Newton and Galileo, and made fundamental corrections to it that become significant and even dominant as the speeds of material objects approach the speed of light, which cannot be exceeded. Relativistic corrections are also needed for quantum mechanics, although general relativity has not been integrated with it; the two theories remain incompatible, a hurdle which must be overcome in developing a Grand Unified Theory.


History

   
Antiquity
Main article: Aristotelian mechanics

The main theory of mechanics in antiquity was Aristotelian mechanics. A later developer in this tradition was Hipparchus.
Medieval age
Main article: Theory of impetus

In the Middle Ages, Aristotle's theories were criticized and modified by a number of figures, beginning with John Philoponus in the 6th century. A central problem was that of projectile motion, which was discussed by Hipparchus and Philoponus. This led to the development of the theory of impetus by the 14th-century French philosopher Jean Buridan, which developed into the modern theories of inertia, velocity, acceleration and momentum. This work and others were developed in 14th-century England by the Oxford Calculators such as Thomas Bradwardine, who studied and formulated various laws regarding falling bodies.

On the question of a body subject to a constant (uniform) force, the 12th-century Jewish-Arab scholar Nathanel (Iraqi, of Baghdad) stated that constant force imparts constant acceleration, while the main properties of uniformly accelerated motion (as of falling bodies) were worked out by the 14th-century Oxford Calculators.
Early modern age

Two central figures in the early modern age are Galileo Galilei and Isaac Newton. Galileo's final statement of his mechanics, particularly of falling bodies, is his Two New Sciences (1638). Newton's 1687 Philosophiæ Naturalis Principia Mathematica provided a detailed mathematical account of mechanics, using the newly developed mathematics of calculus and providing the basis of Newtonian mechanics.

There is some dispute over priority of various ideas: Newton's Principia is certainly the seminal work and has been tremendously influential, and the systematic mathematics therein did not and could not have been stated earlier because calculus had not been developed. However, many of the ideas, particularly as they pertain to inertia (impetus) and falling bodies, had been developed and stated by earlier researchers, both the then-recent Galileo and the less-known medieval predecessors. Precise credit is at times difficult or contentious because scientific language and standards of proof changed, so whether medieval statements are equivalent to modern statements or sufficient proof, or instead merely similar to modern statements and hypotheses, is often debatable.
Modern age

Two main modern developments in mechanics are general relativity of Einstein, and quantum mechanics, both developed in the 20th century based in part on earlier 19th century ideas.
Types of mechanical bodies

The often-used term body needs to stand for a wide assortment of objects, including particles, projectiles, spacecraft, stars, parts of machinery, parts of solids, parts of fluids (gases and liquids), etc.

Other distinctions between the various sub-disciplines of mechanics, concern the nature of the bodies being described. Particles are bodies with little (known) internal structure, treated as mathematical points in classical mechanics. Rigid bodies have size and shape, but retain a simplicity close to that of the particle, adding just a few so-called degrees of freedom, such as orientation in space.

Otherwise, bodies may be semi-rigid, i.e. elastic, or non-rigid, i.e. fluid. These subjects have both classical and quantum divisions of study.

For instance, the motion of a spacecraft, regarding its orbit and attitude (rotation), is described by the relativistic theory of classical mechanics, while the analogous movements of an atomic nucleus are described by quantum mechanics.
Sub-disciplines in mechanics

The following are two lists of various subjects that are studied in mechanics.

Note that there is also the "theory of fields" which constitutes a separate discipline in physics, formally treated as distinct from mechanics, whether classical fields or quantum fields. But in actual practice, subjects belonging to mechanics and fields are closely interwoven. Thus, for instance, forces that act on particles are frequently derived from fields (electromagnetic or gravitational), and particles generate fields by acting as sources. In fact, in quantum mechanics, particles themselves are fields, as described theoretically by the wave function.
Classical mechanics

The following are described as forming Classical mechanics:

    * Newtonian mechanics, the original theory of motion (kinematics) and forces (dynamics)
    * Hamiltonian mechanics, a theoretical formalism, based on the principle of conservation of energy
    * Lagrangian mechanics, another theoretical formalism, based on the principle of least action
    * Celestial mechanics, the motion of heavenly bodies: planets, comets, stars, galaxies, etc.
    * Astrodynamics, spacecraft navigation, etc.
    * Solid mechanics, elasticity, the properties of deformable bodies.
    * Fracture mechanics
    * Acoustics, sound ( = density variation propagation) in solids, fluids and gases.
    * Statics, semi-rigid bodies in mechanical equilibrium
    * Fluid mechanics, the motion of fluids
    * Soil mechanics, mechanical behavior of soils
    * Continuum mechanics, mechanics of continua (both solid and fluid)
    * Hydraulics, mechanical properties of liquids
    * Fluid statics, liquids in equilibrium
    * Applied mechanics, or Engineering mechanics
    * Biomechanics, solids, fluids, etc. in biology
    * Biophysics, physical processes in living organisms
    * Statistical mechanics, assemblies of particles too large to be described in a deterministic way
    * Relativistic or Einsteinian mechanics, universal gravitation

Quantum mechanics

The following are categorized as being part of Quantum mechanics:

    * Particle physics, the motion, structure, and reactions of particles
    * Nuclear physics, the motion, structure, and reactions of nuclei
    * Condensed matter physics, quantum gases, solids, liquids, etc.
    * Quantum statistical mechanics, large assemblies of particles

Professional organizations

    * Applied Mechanics Division, American Society of Mechanical Engineers
    * Fluid Dynamics Division, American Physical Society
    * Institution of Mechanical Engineers is the United Kingdom's qualifying body for Mechanical Engineers and has been the home of Mechanical Engineers for over 150 years.
    * International Union of Theoretical and Applied Mechanics


Material physics

Material physics is the use of physics to describe the physical properties of materials, such as their response to force, heat and light, and their mechanics. It is a synthesis of physical sciences such as chemistry, solid mechanics and solid state physics.

High Energy Physics

Particle physics is a branch of physics that studies the elementary subatomic constituents of matter and radiation, and the interactions between them. It is also called high energy physics, because many elementary particles do not occur under normal circumstances in nature due to energetic instability, but can be created and detected during high energy collisions with other particles, as is done in particle accelerators.

Scientific research in this area has produced a long list of particles.


Subatomic particles

Modern particle physics research is focused on subatomic particles, including atomic constituents such as electrons, protons, and neutrons (protons and neutrons are actually composite particles, made up of quarks), particles produced by radioactive and scattering processes, such as photons, neutrinos, and muons, as well as a wide range of exotic particles.

Strictly speaking, the term particle is a misnomer because the dynamics of particle physics are governed by quantum mechanics. As such, the particles exhibit wave-particle duality, displaying particle-like behavior under certain experimental conditions and wave-like behavior in others (more technically they are described by state vectors in a Hilbert space; see quantum field theory). Following the convention of particle physicists, the term "elementary particles" refers to objects such as electrons and photons, even though it is well known that these "particles" display wave-like properties as well.

All the particles and their interactions observed to date can be described almost entirely by a quantum field theory called the Standard Model. The Standard Model has 17 species of elementary particles: 12 fermions (24 if you count antiparticles separately), 4 vector bosons (5 if you count antiparticles separately), and 1 scalar boson. These elementary particles can combine to form composite particles, accounting for the hundreds of other species of particles discovered since the 1960s. The Standard Model has been found to agree with almost all the experimental tests conducted to date. However, most particle physicists believe that it is an incomplete description of nature, and that a more fundamental theory awaits discovery. In recent years, measurements of neutrino mass have provided the first experimental deviations from the Standard Model.

Particle physics has had a large impact on the philosophy of science. Some particle physicists adhere to reductionism, a point of view that has been criticized and defended by philosophers and scientists. Part of the debate is described below.
History

The idea that all matter is composed of elementary particles dates to at least the 6th century BC. The philosophical doctrine of atomism and the nature of elementary particles were studied by ancient Greek philosophers such as Leucippus, Democritus and Epicurus; ancient Indian philosophers such as Kanada, Dignāga and Dharmakirti; medieval scientists such as Alhazen, Avicenna and Algazel; and early modern European physicists such as Pierre Gassendi, Robert Boyle and Isaac Newton. The particle theory of light was also proposed by Alhazen, Avicenna, Gassendi and Newton. These early ideas were founded in abstract, philosophical reasoning rather than experimentation and empirical observation.

In the 19th century, John Dalton, through his work on stoichiometry, concluded that each element of nature was composed of a single, unique type of particle. Dalton and his contemporaries believed these were the fundamental particles of nature and thus named them atoms, after the Greek word atomos, meaning "indivisible". However, near the end of the century, physicists discovered that atoms were not, in fact, the fundamental particles of nature, but conglomerates of even smaller particles. The early 20th century explorations of nuclear physics and quantum physics culminated in proofs of nuclear fission in 1939 by Lise Meitner (based on experiments by Otto Hahn), and nuclear fusion by Hans Bethe in the same year. These discoveries gave rise to an active industry of generating one atom from another, even rendering possible (although not profitable) the transmutation of lead into gold. They also led to the development of nuclear weapons. Throughout the 1950s and 1960s, a bewildering variety of particles were found in scattering experiments. This was referred to as the "particle zoo". This term was deprecated after the formulation of the Standard Model during the 1970s in which the large number of particles was explained as combinations of a (relatively) small number of fundamental particles.
The Standard Model
Main article: Standard Model

The current state of the classification of elementary particles is the Standard Model. It describes the strong, weak, and electromagnetic fundamental forces, using mediating gauge bosons. The species of gauge bosons are the gluons, the W−, W+ and Z bosons, and the photon. The model also contains 24 fundamental particles, which are the constituents of matter. Finally, it predicts the existence of a type of boson known as the Higgs boson, which is yet to be discovered.
Experimental laboratories

In particle physics, the major international laboratories are:

    * Brookhaven National Laboratory (Long Island, United States). Its main facility is the Relativistic Heavy Ion Collider (RHIC) which collides heavy ions such as gold ions and polarized protons. It is the world's first heavy ion collider, and the world's only polarized proton collider.
    * Budker Institute of Nuclear Physics (Novosibirsk, Russia)
    * CERN, (Franco-Swiss border, near Geneva). Its main project is now the Large Hadron Collider (LHC), which had its first beam circulation on 10 September 2008, and is now the world's most energetic collider of protons. It will also be the most energetic collider of heavy ions when it begins colliding lead ions in 2010. Earlier facilities include the Large Electron–Positron Collider (LEP), which was stopped in 2001 and then dismantled to give way for LHC; and the Super Proton Synchrotron, which is being reused as a pre-accelerator for LHC.
    * DESY (Hamburg, Germany). Its main facility is the Hadron Elektron Ring Anlage (HERA), which collides electrons and positrons with protons.
    * Fermilab, (Batavia, United States). Its main facility is the Tevatron, which collides protons and antiprotons and was the highest energy particle collider in the world until the Large Hadron Collider surpassed it on 29 November 2009.
    * KEK, (Tsukuba, Japan). It is the home of a number of experiments such as K2K, a neutrino oscillation experiment and Belle, an experiment measuring the CP violation of B mesons.
    * SLAC National Accelerator Laboratory (Menlo Park, United States). Its main facility is PEP-II, which collides electrons and positrons.

Many other particle accelerators exist.



Theoretical particle physics attempts to develop the models, theoretical framework, and mathematical tools to understand current experiments and make predictions for future experiments. See also theoretical physics. There are several major interrelated efforts in theoretical particle physics today. One important branch attempts to better understand the standard model and its tests. By extracting the parameters of the Standard Model from experiments with less uncertainty, this work probes the limits of the Standard Model and therefore expands our understanding of nature's building blocks. These efforts are made challenging by the difficulty of calculating quantities in quantum chromodynamics. Some theorists working in this area refer to themselves as phenomenologists and may use the tools of quantum field theory and effective field theory. Others make use of lattice field theory and call themselves lattice theorists.

Another major effort is in model building where model builders develop ideas for what physics may lie beyond the Standard Model (at higher energies or smaller distances). This work is often motivated by the hierarchy problem and is constrained by existing experimental data. It may involve work on supersymmetry, alternatives to the Higgs mechanism, extra spatial dimensions (such as the Randall-Sundrum models), Preon theory, combinations of these, or other ideas.

A third major effort in theoretical particle physics is string theory. String theorists attempt to construct a unified description of quantum mechanics and general relativity by building a theory based on small strings and branes, rather than point particles. If the theory is successful, it may be considered a "Theory of Everything".

There are also other areas of work in theoretical particle physics ranging from particle cosmology to loop quantum gravity.

This division of efforts in particle physics is reflected in the names of categories on the preprint archive hep-th (theory), hep-ph (phenomenology), hep-ex (experiments), hep-lat (lattice gauge theory).
The future
   

Particle physicists internationally agree on the most important goals of particle physics research in the near and intermediate future. The overarching goal, which is pursued in several distinct ways, is to find and understand what physics may lie beyond the standard model. There are several powerful experimental reasons to expect new physics, including dark matter and neutrino mass. There are also theoretical hints that this new physics should be found at accessible energy scales. Most importantly, though, there may be unexpected and unpredicted surprises which will give us the most opportunity to learn about nature.

Much of the effort to find this new physics is focused on new collider experiments. A (relatively) near term goal is the completion of the Large Hadron Collider (LHC) in 2008, which will continue the search for the Higgs boson, supersymmetric particles, and other new physics. An intermediate goal is the construction of the International Linear Collider (ILC), which will complement the LHC by allowing more precise measurements of the properties of newly found particles. A decision on the technology of the ILC was taken in August 2004, but the site has still to be agreed upon.

Additionally, there are important non-collider experiments which also attempt to find and understand physics beyond the Standard Model. One important non-collider effort is the determination of the neutrino masses since these masses may arise from neutrinos mixing with very heavy particles. In addition, cosmological observations provide many useful constraints on the dark matter, although it may be impossible to determine the exact nature of the dark matter without the colliders. Finally, lower bounds on the very long lifetime of the proton put constraints on Grand Unification Theories at energy scales much higher than collider experiments will be able to probe any time soon.



The techniques required to do modern experimental particle physics are quite varied and complex, constituting a sub-specialty nearly completely distinct from the theoretical side of the field. See Category:Experimental particle physics for a partial list of the ideas required for such experiments.

Fluid dynamics



Fluid dynamics offers a systematic structure that underlies practical disciplines such as aerodynamics and hydrodynamics, a structure that embraces empirical and semi-empirical laws derived from flow measurement and used to solve practical problems. The solution to a fluid dynamics problem typically involves calculating various properties of the fluid, such as velocity, pressure, density, and temperature, as functions of space and time.

Historically, hydrodynamics meant something different than it does today. Before the twentieth century, hydrodynamics was synonymous with fluid dynamics. This is still reflected in the names of some fluid dynamics topics, like magnetohydrodynamics and hydrodynamic stability, both of which can also be applied to gases.

Equations of fluid dynamics

The foundational axioms of fluid dynamics are the conservation laws, specifically, conservation of mass, conservation of linear momentum (also known as Newton's Second Law of Motion), and conservation of energy (also known as First Law of Thermodynamics). These are based on classical mechanics and are modified in quantum mechanics and general relativity. They are expressed using the Reynolds Transport Theorem.

In addition to the above, fluids are assumed to obey the continuum assumption. Fluids are composed of molecules that collide with one another and solid objects. However, the continuum assumption considers fluids to be continuous, rather than discrete. Consequently, properties such as density, pressure, temperature, and velocity are taken to be well-defined at infinitesimally small points, and are assumed to vary continuously from one point to another. The fact that the fluid is made up of discrete molecules is ignored.

For fluids which are sufficiently dense to be a continuum, do not contain ionized species, and have velocities small in relation to the speed of light, the momentum equations for Newtonian fluids are the Navier-Stokes equations, a non-linear set of differential equations that describes the flow of a fluid whose stress depends linearly on velocity gradients and pressure. The unsimplified equations do not have a general closed-form solution, so they are primarily of use in computational fluid dynamics. The equations can be simplified in a number of ways, all of which make them easier to solve. Some of the simplifications allow appropriate fluid dynamics problems to be solved in closed form.

In addition to the mass, momentum, and energy conservation equations, a thermodynamical equation of state giving the pressure as a function of other thermodynamic variables for the fluid is required to completely specify the problem. An example of this would be the perfect gas equation of state:

p= \frac{\rho R_u T}{M}
where p is pressure, ρ is density, R_u is the universal gas constant, M is the molar mass and T is temperature.
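As a quick check of the equation of state above, the sketch below evaluates it for air near sea-level conditions. The gas constant, molar mass and flow state are assumed illustrative values, not figures from the text.

    # Hedged sketch: p = rho * R_u * T / M for a perfect gas (air assumed).
    R_U = 8.314       # universal gas constant, J/(mol K)
    M_AIR = 0.02897   # molar mass of air, kg/mol (assumed)

    def pressure(rho, T, molar_mass=M_AIR):
        return rho * R_U * T / molar_mass

    # Sea-level-ish air: density 1.225 kg/m^3 at 288.15 K gives roughly 101 kPa
    print(pressure(rho=1.225, T=288.15))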

Compressible vs incompressible flow

All fluids are compressible to some extent; that is, changes in pressure or temperature will result in changes in density. However, in many situations the changes in pressure and temperature are sufficiently small that the changes in density are negligible. In this case the flow can be modeled as an incompressible flow. Otherwise the more general compressible flow equations must be used.

Mathematically, incompressibility is expressed by saying that the density ρ of a fluid parcel does not change as it moves in the flow field, i.e.,

\frac{\mathrm{D} \rho}{\mathrm{D}t} = 0 \, ,
where D / Dt is the substantial derivative, which is the sum of local and convective derivatives. This additional constraint simplifies the governing equations, especially in the case when the fluid has a uniform density.

For flow of gases, to determine whether to use compressible or incompressible fluid dynamics, the Mach number of the flow must be evaluated. As a rough guide, compressibility effects can be ignored at Mach numbers below approximately 0.3. For liquids, whether the incompressible assumption is valid depends on the fluid properties (specifically the critical pressure and temperature of the fluid) and the flow conditions (how close to the critical pressure the actual flow pressure becomes). Acoustic problems always require allowing compressibility, since sound waves are compression waves involving changes in pressure and density of the medium through which they propagate.
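The sketch below applies that rule of thumb: compute the Mach number from an ideal-gas speed of sound and flag the flow as compressible above roughly M = 0.3. The gas properties and velocities are assumed example values.

    # Hedged sketch: Mach number M = v / a, with a = sqrt(gamma * R * T) for an ideal gas.
    import math

    def mach_number(velocity, temperature, gamma=1.4, R_specific=287.0):
        """Mach number of an airflow at the given static temperature (K)."""
        speed_of_sound = math.sqrt(gamma * R_specific * temperature)
        return velocity / speed_of_sound

    for v in (50.0, 250.0):                              # m/s, assumed examples
        M = mach_number(v, temperature=288.0)
        regime = "incompressible model OK" if M < 0.3 else "treat as compressible"
        print(f"v = {v} m/s, M = {M:.2f}: {regime}")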
Viscous vs inviscid flow

Viscous problems are those in which fluid friction has significant effects on the fluid motion.

The Reynolds number, which is a ratio between inertial and viscous forces, can be used to evaluate whether viscous or inviscid equations are appropriate to the problem.
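A minimal sketch of that ratio, Re = ρUL/μ, is given below; the fluid properties are assumed values for room-temperature water, not figures from the text.

    # Hedged sketch: Reynolds number as the ratio of inertial to viscous forces.
    def reynolds_number(rho, velocity, length, mu):
        """Re = rho * U * L / mu for characteristic density, speed, length and viscosity."""
        return rho * velocity * length / mu

    # Water (assumed: 1000 kg/m^3, 1e-3 Pa s) at 1 m/s in a 0.05 m pipe: Re ~ 5e4,
    # so inertial forces dominate over viscous forces.
    print(reynolds_number(rho=1000.0, velocity=1.0, length=0.05, mu=1.0e-3))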

Stokes flow is flow at very low Reynolds numbers, Re<<1, such that inertial forces can be neglected compared to viscous forces.

Conversely, high Reynolds numbers indicate that the inertial forces are more significant than the viscous (friction) forces. In that case we may treat the flow as an inviscid flow, an approximation in which viscosity is neglected completely compared to the inertial terms.

This idea can work fairly well when the Reynolds number is high. However, certain problems, such as those involving solid boundaries, may require that the viscosity be included. Viscosity often cannot be neglected near solid boundaries because the no-slip condition generates a thin region of large strain rate (known as the boundary layer) which enhances the effect of even a small amount of viscosity, thus generating vorticity. Therefore, to calculate net forces on bodies (such as wings) we should use viscous flow equations. As illustrated by d'Alembert's paradox, a body in an inviscid fluid will experience no drag force. The standard equations of inviscid flow are the Euler equations. Another often-used model, especially in computational fluid dynamics, is to use the Euler equations away from the body and the boundary layer equations, which incorporate viscosity, in a region close to the body.

The Euler equations can be integrated along a streamline to get Bernoulli's equation. When the flow is everywhere irrotational and inviscid, Bernoulli's equation can be used throughout the flow field. Such flows are called potential flows.
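A small worked example of Bernoulli's equation is sketched below: given a measured pressure drop along a streamline, it recovers the downstream speed. The equation form (constant height, incompressible, inviscid) and the numbers are assumptions for illustration.

    # Hedged sketch: Bernoulli along a streamline, p + 0.5*rho*v^2 = constant (same height).
    import math

    def speed_downstream(p1, v1, p2, rho):
        """Solve p1 + 0.5*rho*v1**2 = p2 + 0.5*rho*v2**2 for v2."""
        return math.sqrt(v1 ** 2 + 2.0 * (p1 - p2) / rho)

    # Water (assumed 1000 kg/m^3) entering a constriction where pressure drops by 10 kPa
    print(speed_downstream(p1=101325.0, v1=2.0, p2=91325.0, rho=1000.0))  # about 4.9 m/s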
Steady vs unsteady flow
Hydrodynamics simulation of the Rayleigh–Taylor instability

When all the time derivatives of a flow field vanish, the flow is considered to be a steady flow. Steady-state flow refers to the condition where the fluid properties at a point in the system do not change over time. Otherwise, flow is called unsteady. Whether a particular flow is steady or unsteady, can depend on the chosen frame of reference. For instance, laminar flow over a sphere is steady in the frame of reference that is stationary with respect to the sphere. In a frame of reference that is stationary with respect to a background flow, the flow is unsteady.

Turbulent flows are unsteady by definition. A turbulent flow can, however, be statistically stationary. According to Pope:

    The random field U(x,t) is statistically stationary if all statistics are invariant under a shift in time.

This roughly means that all statistical properties are constant in time. Often, the mean field is the object of interest, and this is constant too in a statistically stationary flow.

Steady flows are often more tractable than otherwise similar unsteady flows. The governing equations of a steady problem have one dimension less (time) than the governing equations of the same problem without taking advantage of the steadiness of the flow field.


Laminar vs turbulent flow

Turbulence is flow characterized by recirculation, eddies, and apparent randomness. Flow in which turbulence is not exhibited is called laminar. It should be noted, however, that the presence of eddies or recirculation alone does not necessarily indicate turbulent flow—these phenomena may be present in laminar flow as well. Mathematically, turbulent flow is often represented via a Reynolds decomposition, in which the flow is broken down into the sum of an average component and a perturbation component.
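The Reynolds decomposition mentioned above can be illustrated with a few lines of arithmetic: a sampled velocity signal is split into its mean and a zero-mean fluctuation. The sample values below are made up for illustration.

    # Hedged sketch: Reynolds decomposition u(t) = u_mean + u'(t) for assumed samples.
    samples = [2.1, 1.8, 2.4, 2.0, 1.7, 2.3, 2.2, 1.9]   # velocity samples, m/s (assumed)

    u_mean = sum(samples) / len(samples)                  # averaged component
    fluctuations = [u - u_mean for u in samples]          # perturbation component (zero mean)

    print(round(u_mean, 3))
    print([round(f, 3) for f in fluctuations])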

It is believed that turbulent flows can be described well through the use of the Navier–Stokes equations. Direct numerical simulation (DNS), based on the Navier–Stokes equations, makes it possible to simulate turbulent flows at moderate Reynolds numbers. Restrictions depend on the power of the computer used and the efficiency of the solution algorithm. The results of DNS agree with the experimental data.

Most flows of interest have Reynolds numbers much too high for DNS to be a viable option, given the state of computational power for the next few decades. Any flight vehicle large enough to carry a human (L > 3 m), moving faster than 72 km/h (20 m/s) is well beyond the limit of DNS simulation (Re = 4 million). Transport aircraft wings (such as on an Airbus A300 or Boeing 747) have Reynolds numbers of 40 million (based on the wing chord). In order to solve these real-life flow problems, turbulence models will be a necessity for the foreseeable future. Reynolds-averaged Navier–Stokes equations (RANS) combined with turbulence modeling provides a model of the effects of the turbulent flow. Such a modeling mainly provides the additional momentum transfer by the Reynolds stresses, although the turbulence also enhances the heat and mass transfer. Another promising methodology is large eddy simulation (LES), especially in the guise of detached eddy simulation (DES)—which is a combination of RANS turbulence modeling and large eddy simulation.
Newtonian vs non-Newtonian fluids

Sir Isaac Newton showed that, for many familiar fluids such as water and air, the stress is very nearly linearly related to the rate of strain. These Newtonian fluids are modeled by a coefficient called viscosity, which depends on the specific fluid.

However, some other materials, such as emulsions and slurries and some visco-elastic materials (e.g. blood, some polymers), have more complicated non-Newtonian stress-strain behaviours. These materials include sticky liquids such as latex, honey, and lubricants, which are studied in the sub-discipline of rheology.
Subsonic vs transonic, supersonic and hypersonic flows

While many terrestrial flows (e.g. flow of water through a pipe) occur at low Mach numbers, many flows of practical interest (e.g. in aerodynamics) occur at high fractions of Mach 1 (transonic flows) or in excess of it (supersonic flows). New phenomena occur at these Mach number regimes (e.g. shock waves for supersonic flow, transonic instability in a regime of flows with M nearly equal to 1, non-equilibrium chemical behavior due to ionization in hypersonic flows), and it is necessary to treat each of these flow regimes separately.
Magnetohydrodynamics

Magnetohydrodynamics is the multi-disciplinary study of the flow of electrically conducting fluids in electromagnetic fields. Examples of such fluids include plasmas, liquid metals, and salt water. The fluid flow equations are solved simultaneously with Maxwell's equations of electromagnetism.
Other approximations

There are a large number of other possible approximations to fluid dynamic problems. Some of the more commonly used are listed below.

    * The Boussinesq approximation neglects variations in density except to calculate buoyancy forces. It is often used in free convection problems where density changes are small.
    * Lubrication theory and Hele-Shaw flow exploit the large aspect ratio of the domain to show that certain terms in the equations are small and so can be neglected.
    * Slender-body theory is a methodology used in Stokes flow problems to estimate the force on, or flow field around, a long slender object in a viscous fluid.
    * The shallow-water equations can be used to describe a layer of relatively inviscid fluid with a free surface, in which surface gradients are small.
    * The Boussinesq equations are applicable to surface waves on thicker layers of fluid and with steeper surface slopes.
    * Darcy's law is used for flow in porous media, and works with variables averaged over several pore-widths.
    * In rotating systems, the quasi-geostrophic approximation assumes an almost perfect balance between pressure gradients and the Coriolis force. It is useful in the study of atmospheric dynamics.

Terminology in fluid dynamics

The concept of pressure is central to the study of both fluid statics and fluid dynamics. A pressure can be identified for every point in a body of fluid, regardless of whether the fluid is in motion or not. Pressure can be measured using an aneroid, Bourdon tube, mercury column, or various other methods.

Some of the terminology that is necessary in the study of fluid dynamics is not found in other similar areas of study. In particular, some of the terminology used in fluid dynamics is not used in fluid statics.
Terminology in incompressible fluid dynamics

The concepts of total pressure and dynamic pressure arise from Bernoulli's equation and are significant in the study of all fluid flows. (These two pressures are not pressures in the usual sense—they cannot be measured using an aneroid, Bourdon tube or mercury column.) To avoid potential ambiguity when referring to pressure in fluid dynamics, many authors use the term static pressure to distinguish it from total pressure and dynamic pressure. Static pressure is identical to pressure and can be identified for every point in a fluid flow field.

In Aerodynamics, L.J. Clancy writes: To distinguish it from the total and dynamic pressures, the actual pressure of the fluid, which is associated not with its motion but with its state, is often referred to as the static pressure, but where the term pressure alone is used it refers to this static pressure.

A point in a fluid flow where the flow has come to rest (i.e. speed is equal to zero adjacent to some solid body immersed in the fluid flow) is of special significance. It is of such importance that it is given a special name—a stagnation point. The static pressure at the stagnation point is of special significance and is given its own name—stagnation pressure. In incompressible flows, the stagnation pressure at a stagnation point is equal to the total pressure throughout the flow field.
Terminology in compressible fluid dynamics

In a compressible fluid, such as air, the temperature and density are essential when determining the state of the fluid. In addition to the concept of total pressure (also known as stagnation pressure), the concepts of total (or stagnation) temperature and total (or stagnation) density are also essential in any study of compressible fluid flows. To avoid potential ambiguity when referring to temperature and density, many authors use the terms static temperature and static density. Static temperature is identical to temperature, and static density is identical to density; both can be identified for every point in a fluid flow field.

The temperature and density at a stagnation point are called stagnation temperature and stagnation density.
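For a perfect gas, static and stagnation temperature are linked by the standard isentropic relation T0/T = 1 + ((gamma - 1)/2) * M^2. The sketch below evaluates it for assumed flight conditions; the formula is the usual textbook relation rather than anything specific to this article.

    # Hedged sketch: stagnation temperature from the isentropic relation for a perfect gas.
    def stagnation_temperature(T_static, mach, gamma=1.4):
        """Total (stagnation) temperature when the flow is brought to rest isentropically."""
        return T_static * (1.0 + 0.5 * (gamma - 1.0) * mach ** 2)

    # Air (assumed gamma = 1.4) at 220 K static temperature and Mach 2
    print(stagnation_temperature(T_static=220.0, mach=2.0))   # 396 K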

A similar approach is also taken with the thermodynamic properties of compressible fluids. Many authors use the terms total (or stagnation) enthalpy and total (or stagnation) entropy. The terms static enthalpy and static entropy appear to be less common, but where they are used they mean nothing more than enthalpy and entropy respectively, and the prefix "static" is being used to avoid ambiguity with their 'total' or 'stagnation' counterparts. Because the 'total' flow conditions are defined by isentropically bringing the fluid to rest, the total (or stagnation) entropy is by definition always equal to the "static" entropy.

Dynamics

In the field of physics, the study of the causes of motion and changes in motion is dynamics. In other words, it is the study of forces and of why objects are in motion. Dynamics includes the study of the effect of torques on motion. This is in contrast to kinematics, the branch of classical mechanics that describes the motion of objects without consideration of the causes leading to that motion.

Generally speaking, researchers involved in dynamics study how a physical system might develop or alter over time and study the causes of those changes. Isaac Newton established the physical laws which undergird dynamics in physics, and dynamics can be understood by studying his system of mechanics. In particular, dynamics is mostly related to Newton's second law of motion. However, all three laws of motion are taken into consideration, because these are interrelated in any given observation or experiment.

For classical electromagnetism, it is Maxwell's equations that describe the dynamics. And the dynamics of classical systems involving both mechanics and electromagnetism are described by the combination of Newton's laws, Maxwell's equations, and the Lorentz force.


Force

From Newton, force can be defined as an exertion or pressure which can cause an object to move. The concept of force is used to describe an influence which causes a free body (object) to accelerate. It can be a push or a pull, which causes an object to change direction, to speed up or slow down (change velocity), or to deform temporarily or permanently. Generally speaking, force causes an object's state of motion to change.
Newton's laws

Newton described force as the ability to cause a mass to accelerate.

    * Newton's first law states that an object in motion will stay in motion unless acted upon by a net external force. This law deals with inertia, which is a property of matter that resists acceleration and depends only on mass.
    * Newton's second law states that force quantity is equal to mass multiplied by the acceleration (F = ma).
    * Newton's third law states that for every action, there is an equal but opposite reaction.

Summary

    * Forces have the ability to cause acceleration.
    * If an object is accelerating, the net force on it is not zero.
    * To find the net force, take the vector sum of all component forces.
    * To find the acceleration, use the equation Fnet = ma.
    * If forces act at angles, trigonometry is needed to resolve them: Fx = F cos(θ) and Fy = F sin(θ) (see the sketch after this list).
    * If forces act along the axes, no trigonometry is needed to solve.
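A minimal sketch of the recipe above, resolving angled forces into components and applying Fnet = ma, is given below; the force magnitudes, angles and mass are assumed example values.

    # Hedged sketch: resolve forces into components, sum them, then apply F_net = m*a.
    import math

    def net_acceleration(forces, mass):
        """forces: list of (magnitude_N, angle_degrees) pairs. Returns (ax, ay) in m/s^2."""
        fx = sum(F * math.cos(math.radians(theta)) for F, theta in forces)
        fy = sum(F * math.sin(math.radians(theta)) for F, theta in forces)
        return fx / mass, fy / mass

    # A 10 N pull at 30 degrees plus a 4 N force acting in the opposite horizontal
    # direction, on a 2 kg object (all values assumed)
    ax, ay = net_acceleration([(10.0, 30.0), (4.0, 180.0)], mass=2.0)
    print(round(ax, 2), round(ay, 2))   # about 2.33 and 2.5 m/s^2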

Cryogenics

In physics, cryogenics is the study of the production of very low temperatures (below −150 °C, −238 °F or 123 K) and the behavior of materials at those temperatures. A person who studies elements at extremely cold temperatures is called a cryogenicist. Rather than the relative temperature scales of Celsius and Fahrenheit, cryogenicists use the absolute temperature scales: Kelvin (SI units) or Rankine (English/US units).
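As a small illustration of moving between those scales, the conversions below express the commonly quoted −150 °C cryogenic threshold in kelvin and degrees Rankine. The conversion formulas are standard; the threshold itself is the one quoted above.

    # Hedged sketch: converting the cryogenic threshold to the absolute scales.
    def celsius_to_kelvin(c):
        return c + 273.15

    def celsius_to_rankine(c):
        return (c + 273.15) * 9.0 / 5.0

    threshold_c = -150.0
    print(celsius_to_kelvin(threshold_c))    # 123.15 K
    print(celsius_to_rankine(threshold_c))   # about 221.7 degrees Rankine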


Definitions and distinctions

The terms cryogenics, cryobiology, or cryonics are frequently confused. Other new terms with the prefix cryo- have also been introduced.

Cryogenics
    The branches of physics and engineering that involve the study of very low temperatures, how to produce them, and how materials behave at those temperatures.

Cryobiology
    The branch of biology involving the study of the effects of low temperatures on organisms (most often for the purpose of achieving cryopreservation).

Cryonics
    The emerging medical technology of cryopreserving humans and animals with the intention of future revival. Researchers in the field seek to apply the results of many sciences, including cryobiology, cryogenics, rheology, emergency medicine, etc.

Cryoelectronics
    The field of research regarding superconductivity at low temperatures.

Cryotronics
    The practical application of cryoelectronics.

Etymology

The word cryogenics stems from Greek and means "the production of freezing cold"; however, the term is used today as a synonym for the low-temperature state. It is not well defined at what point on the temperature scale refrigeration ends and cryogenics begins, but most scientists assume it starts at or below −240 °F (about −150 °C or 123 K). The National Institute of Standards and Technology at Boulder, Colorado has chosen to consider the field of cryogenics as that involving temperatures below −180 °C (93.15 K). This is a logical dividing line, since the normal boiling points of the so-called permanent gases (such as helium, hydrogen, neon, nitrogen, oxygen, and normal air) lie below −180 °C while the Freon refrigerants, hydrogen sulfide, and other common refrigerants have boiling points above −180 °C.
Industrial application

Cryogenic valve
Further information: Timeline of low-temperature technology

Liquefied gases, such as liquid nitrogen and liquid helium, are used in many cryogenic applications. Liquid nitrogen is the most commonly used element in cryogenics and is legally purchasable around the world. Liquid helium is also commonly used and allows for the lowest attainable temperatures to be reached.

These liquids are held in either special containers known as Dewar flasks, which are generally about six feet tall (1.8 m) and three feet (91.5 cm) in diameter, or giant tanks in larger commercial operations. Dewar flasks are named after their inventor, James Dewar, the man who first liquefied hydrogen. Museums typically display smaller vacuum flasks fitted in a protective casing.

Cryogenic transfer pumps are used on LNG piers to transfer liquefied natural gas from LNG carriers to LNG storage tanks; cryogenic valves are used in the same service.
Cryogenic processing

The field of cryogenics advanced during World War II when scientists found that metals frozen to low temperatures showed more resistance to wear. Based on this theory of cryogenic hardening, the commercial cryogenic processing industry was founded in 1966 by Ed Busch. With a background in the heat treating industry, Busch founded a company in Detroit called CryoTech in 1966. Though CryoTech later merged with 300 Below to create the largest and oldest commercial cryogenics company in the world, they originally experimented with the possibility of increasing the life of metal tools to anywhere between 200%-400% of the original life expectancy using cryogenic tempering instead of heat treating. This evolved in the late 1990s into the treatment of other parts (that did more than just increase the life of a product) such as amplifier valves (improved sound quality), baseball bats (greater sweet spot), golf clubs (greater sweet spot), racing engines (greater performance under stress), firearms (less warping after continuous shooting), knives, razor blades, brake rotors and even pantyhose. The theory was based on how heat-treating metal works (the temperatures are lowered to room temperature from a high degree causing certain strength increases in the molecular structure to occur) and supposed that continuing the descent would allow for further strength increases. Using liquid nitrogen, CryoTech formulated the first early version of the cryogenic processor. Unfortunately for the newly-born industry, the results were unstable, as components sometimes experienced thermal shock when they were cooled too quickly. Some components in early tests even shattered because of the ultra-low temperatures. In the late twentieth century, the field improved significantly with the rise of applied research, which coupled microprocessor based industrial controls to the cryogenic processor in order to create more stable results.

Cryogens, like liquid nitrogen, are further used for specialty chilling and freezing applications. Some chemical reactions, like those used to produce the active ingredients for the popular statin drugs, must occur at low temperatures of approximately −100 °C. Special cryogenic chemical reactors are used to remove reaction heat and provide a low temperature environment. The freezing of foods and biotechnology products, like vaccines, requires nitrogen in blast freezing or immersion freezing systems. Certain soft or elastic materials become hard and brittle at very low temperatures, which makes cryogenic milling (cryomilling) an option for some materials that cannot easily be milled at higher temperatures.

Cryogenic processing is not a substitute for heat treatment, but rather an extension of the heating - quenching - tempering cycle. Normally, when an item is quenched, the final temperature is ambient. The only reason for this is that most heat treaters do not have cooling equipment. There is nothing metallurgically significant about ambient temperature. The cryogenic process continues this action from ambient temperature down to −320 °F (140 °R; 78 K; −196 °C). In most instances the cryogenic cycle is followed by a heat tempering procedure. As all alloys do not have the same chemical constituents, the tempering procedure varies according to the material's chemical composition, thermal history and/or a tool's particular service application.

The entire process takes 3–4 days.
Fuels

Another use of cryogenics is cryogenic fuels. Cryogenic fuels, mainly liquid hydrogen, have been used as rocket fuels. Liquid oxygen is used as an oxidizer of hydrogen, but oxygen is not, strictly speaking, a fuel. For example, NASA's workhorse space shuttle uses cryogenic hydrogen fuel as its primary means of getting into orbit, as did all of the rockets built for the Soviet space program by Sergei Korolev. (This was a bone of contention between him and rival engine designer Valentin Glushko, who felt that cryogenic fuels were impractical for large-scale rockets such as the ill-fated N-1 rocket.)

Russian aircraft manufacturer Tupolev developed a version of its popular design Tu-154 with a cryogenic fuel system, known as the Tu-155. The plane uses a fuel referred to as liquefied natural gas or LNG, and made its first flight in 1989.
Applications

Some applications of cryogenics are

Magnetic Resonance Imaging (MRI): MRI is a method of imaging objects that uses a strong magnetic field to detect the relaxation of protons that have been perturbed by a radio-frequency pulse. This magnetic field is generated by electromagnets, and high field strengths can be achieved by using superconducting magnets. Traditionally, liquid helium is used to cool the coils because it has a boiling point of around 4 K at ambient pressure, and cheap metallic superconductors can be used for the coil wiring. So-called high-temperature superconducting compounds can be made to superconduct with the use of liquid nitrogen, which boils at around 77 K.

Power Transmission in Big Cities: It is difficult to transmit power by overhead cables in big cities, so underground cables are used. But underground cables heat up, and the resistance of the wire increases, leading to wasted power. Cryogenics can help here: liquefied gases are sprayed on the cables to keep them cool and reduce their resistance.

Food Freezing: Cryogenic gases are used in the transportation of large masses of frozen food. When very large quantities of food must be transported to regions such as war zones or earthquake-hit areas, they must be stored for a long time, so cryogenic food freezing is used. Cryogenic food freezing is also helpful for large-scale food processing industries.

Blood Banking: Certain rare blood groups are stored at very low temperatures, such as −165 °C.
Production

Cryogenic cooling of devices and material is usually achieved via the use of liquid nitrogen, liquid helium, or a cryocompressor (which uses high pressure helium lines). Newer devices such as pulse cryocoolers and Stirling cryocoolers have been devised. The most recent development in cryogenics is the use of magnets as regenerators as well as refrigerators. These devices work on the principle known as the magnetocaloric effect.
Detectors

Cryogenic temperatures, usually well below 77 K (−196 °C), are required to operate cryogenic detectors.

Condensed matter physics

Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter. In particular, it is concerned with the "condensed" phases that appear whenever the number of constituents in a system is extremely large and the interactions between the constituents are strong. The most familiar examples of condensed phases are solids and liquids, which arise from the electromagnetic forces between atoms. More exotic condensed phases include the superconducting phase exhibited by certain materials at low temperature, the ferromagnetic and antiferromagnetic phases of spins on atomic lattices, and the Bose-Einstein condensate found in certain ultracold atomic systems.

The aim of condensed matter physics is to understand the behavior of these phases by using well-established physical laws, in particular those of quantum mechanics, electromagnetism and statistical mechanics. The diversity of systems and phenomena available for study makes condensed matter physics by far the largest field of contemporary physics. By one estimate, one third of all United States physicists identify themselves as condensed matter physicists. The field has a large overlap with chemistry, materials science, and nanotechnology, and there are close connections with the related fields of atomic physics and biophysics. Theoretical condensed matter physics also shares many important concepts and techniques with theoretical particle and nuclear physics.

Historically, condensed matter physics grew out of solid-state physics, which is now considered one of its main subfields. The name of the field was apparently coined in 1967 by Philip Anderson and Volker Heine when they renamed their research group in the Cavendish Laboratory of the University of Cambridge from "Solid-State Theory" to "Theory of Condensed Matter". In 1978, the Division of Solid State Physics at the American Physical Society was renamed as the Division of Condensed Matter Physics. One of the reasons for this change is that many of the concepts and techniques developed for studying solids can also be applied to fluid systems. For instance, the conduction electrons in an electrical conductor form a Fermi liquid, with similar properties to conventional liquids made up of atoms or molecules. Even the phenomenon of superconductivity, in which the quantum-mechanical properties of the electrons lead to collective behavior fundamentally different from that of a classical fluid, is closely related to the superfluid phase of liquid helium.


Topics in condensed matter physics

    * Phases
          o Generic phases - Gas (uncondensed); Liquid; Solid
          o Low temperature phases - Fermi gas; Fermi liquid; Fermionic condensate; Luttinger liquid; Superfluid; Composite fermions; Supersolid
          o Phase phenomena - Order parameter; Phase transition; Cooling curve
    * Interfaces
          o Surface tension
          o Domain growth - Nucleation; Spinodal decomposition
          o Interfacial growth - Dendritic growth; Solidification fronts; Viscous fingering
    * Crystalline solids
          o Types - Insulator; Metal; Semiconductor; Semimetal
          o Electronic properties - Band gap; Bloch wave; Conduction band; Effective mass (solid-state physics); Electrical conduction; Electron hole; Valence band
          o Electronic phenomena - Kondo effect; Plasmon; Quantum Hall effect; Superconductivity; Wigner crystal; Thermoelectricity
          o Lattice phenomena - Antiferromagnet; Ferroelectric effect; Ferromagnet; Magnon; Phonon; Spin glass; Topological defect; Multiferroics
    * Non-crystalline solids
          o Types - Amorphous solid; Granular matter; Quasicrystals
    * Soft condensed matter
          o Types - Liquid crystals; Polymers; Complex fluids; Gels; Foams; Emulsions; Colloids
    * Nanotechnology
          o Nanoelectromechanical Systems (NEMS)
          o Magnetic Resonance Force Microscopy
          o Heat Transport in Nanoscale Systems
          o Spin Transport

Computational physics

Computational physics is the study and implementation of numerical algorithms to solve problems in physics for which a quantitative theory already exists. It is often regarded as a subdiscipline of theoretical physics, but some consider it an intermediate branch between theoretical and experimental physics.
Physicists often have a very precise mathematical theory describing how a system will behave. Unfortunately, it is often the case that solving the theory's equations ab initio in order to produce a useful prediction is not practical. This is especially true with quantum mechanics, where only a handful of simple models have complete analytic solutions. In cases where the systems only have numerical solutions, computational methods are used.
Computation now represents an essential component of modern research in accelerator physics, astrophysics, fluid mechanics, lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling) and solid state physics. Computational solid state physics, for example, uses density functional theory to calculate properties of solids, a method similar to that used by chemists to study molecules.
Many other more general numerical problems fall loosely under the domain of computational physics, although they could easily be considered pure mathematics or part of any number of applied areas. These include

    Solving differential equations
    Evaluating integrals
    Stochastic methods, especially Monte Carlo methods
    Specialized partial differential equation methods, for example the finite difference method and the finite element method
    The matrix eigenvalue problem – the problem of finding eigenvalues of very large matrices, and their corresponding eigenvectors (eigenstates in quantum physics)
    The pseudo-spectral method

All these methods (and several others) are used to calculate physical properties of the modeled systems. Computational physics also encompasses the tuning of the software and hardware structure to solve the problems, since the problems can be very large in their demands on processing power or memory.
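As a minimal illustration of one of the stochastic methods listed above, the Python sketch below estimates a definite integral by Monte Carlo sampling. The integrand and sample count are arbitrary choices made for the example.

    import random

    def monte_carlo_integral(f, a, b, n=100_000):
        """Estimate the integral of f over [a, b] by averaging random samples."""
        total = sum(f(a + (b - a) * random.random()) for _ in range(n))
        return (b - a) * total / n

    # Example: integrate x**2 on [0, 1]; the exact answer is 1/3.
    estimate = monte_carlo_integral(lambda x: x * x, 0.0, 1.0)
    print(f"Monte Carlo estimate: {estimate:.4f} (exact: 0.3333)")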

Classical mechanics



In the field of physics, classical mechanics is one of the two major sub-fields of the science of mechanics, which is concerned with the set of physical laws that govern and mathematically describe the motions of bodies and aggregates of bodies geometrically distributed within a certain boundary under the action of a system of forces. The other sub-field is quantum mechanics.
Classical mechanics is used for describing the motion of macroscopic objects, from projectiles to parts of machinery, as well as astronomical objects, such as spacecraft, planets, stars, and galaxies. It produces very accurate results within these domains, and is one of the oldest and largest subjects in science, engineering and technology. Besides this, many related specialties exist that deal with gases, liquids, and solids, and so on. In addition, classical mechanics is enhanced by special relativity for high velocity objects that are approaching the speed of light. General relativity is employed to handle gravitation at a deeper level, and finally, quantum mechanics handles the wave-particle duality of atoms and molecules.
The term classical mechanics was coined in the early 20th century to describe the system of mathematical physics begun by Isaac Newton and many contemporary 17th-century natural philosophers, building upon the earlier astronomical theories of Johannes Kepler, which in turn were based on the precise observations of Tycho Brahe and on Galileo's studies of terrestrial projectile motion, but developed before quantum physics and relativity. Therefore, some sources exclude so-called "relativistic physics" from that category. However, a number of modern sources do include Einstein's mechanics, which in their view represents classical mechanics in its most developed and most accurate form.
The initial stage in the development of classical mechanics is often referred to as Newtonian mechanics, and is associated with the physical concepts employed by and the mathematical methods invented by Newton himself, in parallel with Leibniz and others. This is further described in the following sections. More abstract and general methods include Lagrangian mechanics and Hamiltonian mechanics. Much of the content of classical mechanics was created in the 18th and 19th centuries and extends considerably beyond (particularly in its use of analytical mathematics) the work of Newton.

Description of the theory


The analysis of projectile motion is a part of classical mechanics.
The following introduces the basic concepts of classical mechanics. For simplicity, it often models real-world objects as point particles, objects with negligible size. The motion of a point particle is characterized by a small number of parameters: its position, mass, and the forces applied to it. Each of these parameters is discussed in turn.
In reality, the kind of objects which classical mechanics can describe always have a non-zero size. (The physics of very small particles, such as the electron, is more accurately described by quantum mechanics). Objects with non-zero size have more complicated behavior than hypothetical point particles, because of the additional degrees of freedom—for example, a baseball can spin while it is moving. However, the results for point particles can be used to study such objects by treating them as composite objects, made up of a large number of interacting point particles. The center of mass of a composite object behaves like a point particle.

Position and its derivatives

The SI derived "mechanical" (that is, not electromagnetic or thermal) units, expressed in kg, m and s:
    * Position - m
    * Angular position/angle - unitless (radian)
    * Velocity - m s−1
    * Angular velocity - s−1
    * Acceleration - m s−2
    * Angular acceleration - s−2
    * Jerk - m s−3
    * "Angular jerk" - s−3
    * Specific energy - m2 s−2
    * Absorbed dose rate - m2 s−3
    * Moment of inertia - kg m2
    * Momentum - kg m s−1
    * Angular momentum - kg m2 s−1
    * Force - kg m s−2
    * Torque - kg m2 s−2
    * Energy - kg m2 s−2
    * Power - kg m2 s−3
    * Pressure and energy density - kg m−1 s−2
    * Surface tension - kg s−2
    * Spring constant - kg s−2
    * Irradiance and energy flux - kg s−3
    * Kinematic viscosity - m2 s−1
    * Dynamic viscosity - kg m−1 s−1
    * Density (mass density) - kg m−3
    * Density (weight density) - kg m−2 s−2
    * Number density - m−3
    * Action - kg m2 s−1
The position of a point particle is defined with respect to an arbitrary fixed reference point, O, in space, usually accompanied by a coordinate system, with the reference point located at the origin of the coordinate system. It is defined as the vector r from O to the particle. In general, the point particle need not be stationary relative to O, so r is a function of t, the time elapsed since an arbitrary initial time. In pre-Einstein relativity (known as Galilean relativity), time is considered an absolute, i.e., the time interval between any given pair of events is the same for all observers. In addition to relying on absolute time, classical mechanics assumes Euclidean geometry for the structure of space.

Velocity and speed

The velocity, or the rate of change of position with time, is defined as the derivative of the position with respect to time or
\mathbf{v} = {\mathrm{d}\mathbf{r} \over \mathrm{d}t}\,\!.
In classical mechanics, velocities are directly additive and subtractive. For example, if one car traveling East at 60 km/h passes another car traveling East at 50 km/h, then from the perspective of the slower car, the faster car is traveling east at 60 − 50 = 10 km/h. Whereas, from the perspective of the faster car, the slower car is moving 10 km/h to the West. Velocities are directly additive as vector quantities; they must be dealt with using vector analysis.
Mathematically, if the velocity of the first object in the previous discussion is denoted by the vector u = ud and the velocity of the second object by the vector v = ve, where u is the speed of the first object, v is the speed of the second object, and d and e are unit vectors in the directions of motion of each particle respectively, then the velocity of the first object as seen by the second object is
\mathbf{u}' = \mathbf{u} - \mathbf{v} \, .
Similarly,
\mathbf{v'}= \mathbf{v} - \mathbf{u} \, .
When both objects are moving in the same direction, this equation can be simplified to
\mathbf{u}' = ( u - v ) \mathbf{d} \, .
Or, by ignoring direction, the difference can be given in terms of speed only:
u' = u - v \, .

Acceleration

The acceleration, or rate of change of velocity, is the derivative of the velocity with respect to time (the second derivative of the position with respect to time) or
\mathbf{a} = {\mathrm{d}\mathbf{v} \over \mathrm{d}t}.
Acceleration can arise from a change with time of the magnitude of the velocity or of the direction of the velocity or both. If only the magnitude v of the velocity decreases, this is sometimes referred to as deceleration, but generally any change in the velocity with time, including deceleration, is simply referred to as acceleration.

Frames of reference

While the position and velocity and acceleration of a particle can be referred to any observer in any state of motion, classical mechanics assumes the existence of a special family of reference frames in terms of which the mechanical laws of nature take a comparatively simple form. These special reference frames are called inertial frames. They are characterized by the absence of acceleration of the observer and the requirement that all forces entering the observer's physical laws originate in identifiable sources (charges, gravitational bodies, and so forth). A non-inertial reference frame is one accelerating with respect to an inertial one, and in such a non-inertial frame a particle is subject to acceleration by fictitious forces that enter the equations of motion solely as a result of its accelerated motion, and do not originate in identifiable sources. These fictitious forces are in addition to the real forces recognized in an inertial frame. A key concept of inertial frames is the method for identifying them. For practical purposes, reference frames that are unaccelerated with respect to the distant stars are regarded as good approximations to inertial frames.
Consider two reference frames S and S' . For observers in each of the reference frames an event has space-time coordinates of (x,y,z,t) in frame S and (x′,y′,z′,t′) in frame S′. Assuming time is measured the same in all reference frames, and if we require x = x' when t = 0, then the relation between the space-time coordinates of the same event observed from the reference frames S′ and S, which are moving at a relative velocity of u in the x direction is:
x′ = x − ut
y′ = y
z′ = z
t′ = t
This set of formulas defines a group transformation known as the Galilean transformation (informally, the Galilean transform). This group is a limiting case of the Poincaré group used in special relativity. The limiting case applies when the velocity u is very small compared to c, the speed of light.
The transformations have the following consequences:
  • v′ = v − u (the velocity v′ of a particle from the perspective of S′ is slower by u than its velocity v from the perspective of S)
  • a′ = a (the acceleration of a particle is the same in any inertial reference frame)
  • F′ = F (the force on a particle is the same in any inertial reference frame)
  • the speed of light is not a constant in classical mechanics, nor does the special position given to the speed of light in relativistic mechanics have a counterpart in classical mechanics.
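The following is a tiny Python sketch of the Galilean transformation written out above; the relative velocity u, the event coordinates, and the particle velocity v are arbitrary values chosen for illustration.

    # Coordinates of an event in frame S', moving at speed u along x relative to S.
    def galilean(x, y, z, t, u):
        return x - u * t, y, z, t

    u = 5.0                        # relative velocity of S' with respect to S, m/s
    event = (20.0, 1.0, 0.0, 2.0)  # (x, y, z, t) of an event in frame S
    print(galilean(*event, u))     # (10.0, 1.0, 0.0, 2.0): x' = x - u t, t' = t

    # Velocities transform as v' = v - u, so accelerations (and forces) are unchanged.
    v = 12.0
    print(v - u)                   # 7.0 m/s as measured in S'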
For some problems, it is convenient to use rotating coordinates (reference frames). Thereby one can either keep a mapping to a convenient inertial frame, or introduce additionally a fictitious centrifugal force and Coriolis force.

Forces; Newton's second law

Newton was the first to mathematically express the relationship between force and momentum. Some physicists interpret Newton's second law of motion as a definition of force and mass, while others consider it to be a fundamental postulate, a law of nature. Either interpretation has the same mathematical consequences, historically known as "Newton's Second Law":
\mathbf{F} = {\mathrm{d}\mathbf{p} \over \mathrm{d}t} = {\mathrm{d}(m \mathbf{v}) \over \mathrm{d}t}.
The quantity mv is called the (canonical) momentum. The net force on a particle is thus equal to the rate of change of the momentum of the particle with time. Since the definition of acceleration is a = dv/dt, the second law can be written in the simplified and more familiar form:
\mathbf{F} = m \mathbf{a} \, .
So long as the force acting on a particle is known, Newton's second law is sufficient to describe the motion of a particle. Once independent relations for each force acting on a particle are available, they can be substituted into Newton's second law to obtain an ordinary differential equation, which is called the equation of motion.
As an example, assume that friction is the only force acting on the particle, and that it may be modeled as a function of the velocity of the particle, for example:
\mathbf{F}_{\rm R} = - \lambda \mathbf{v} \, ,
where λ is a positive constant. Then the equation of motion is
- \lambda \mathbf{v} = m \mathbf{a} = m {\mathrm{d}\mathbf{v} \over \mathrm{d}t} \, .
This can be integrated to obtain
\mathbf{v} = \mathbf{v}_0 e^{- \lambda t / m}
where v0 is the initial velocity. This means that the velocity of this particle decays exponentially to zero as time progresses. In this case, an equivalent viewpoint is that the kinetic energy of the particle is absorbed by friction (which converts it to heat energy in accordance with the conservation of energy), slowing it down. This expression can be further integrated to obtain the position r of the particle as a function of time.
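The Python sketch below gives a rough numerical check of this result, integrating m dv/dt = −λv with a simple explicit Euler step and comparing against the analytic solution v = v0 exp(−λt/m). The mass, drag constant and initial speed are assumed example values.

    import math

    m, lam, v0, dt = 1.0, 0.5, 10.0, 1e-3   # kg, kg/s, m/s, s (example values)
    v, t = v0, 0.0
    while t < 5.0:
        a = -lam * v / m        # F = -lambda * v  =>  a = F / m
        v += a * dt             # explicit Euler step
        t += dt

    print(f"numerical v(5) = {v:.4f}")
    print(f"analytic  v(5) = {v0 * math.exp(-lam * t / m):.4f}")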
Important forces include the gravitational force and the Lorentz force for electromagnetism. In addition, Newton's third law can sometimes be used to deduce the forces acting on a particle: if it is known that particle A exerts a force F on another particle B, it follows that B must exert an equal and opposite reaction force, −F, on A. The strong form of Newton's third law requires that F and −F act along the line connecting A and B, while the weak form does not. Illustrations of the weak form of Newton's third law are often found for magnetic forces.

Work and energy

If a constant force F is applied to a particle that achieves a displacement Δr, the work done by the force is defined as the scalar product of the force and displacement vectors:
 W = \mathbf{F} \cdot \Delta \mathbf{r} \, .
More generally, if the force varies as a function of position as the particle moves from r1 to r2 along a path C, the work done on the particle is given by the line integral
 W = \int_C \mathbf{F}(\mathbf{r}) \cdot \mathrm{d}\mathbf{r} \, .
If the work done in moving the particle from r1 to r2 is the same no matter what path is taken, the force is said to be conservative. Gravity is a conservative force, as is the force due to an idealized spring, as given by Hooke's law. The force due to friction is non-conservative.
The kinetic energy Ek of a particle of mass m travelling at speed v is given by
E_k = \tfrac{1}{2}mv^2 \, .
For extended objects composed of many particles, the kinetic energy of the composite body is the sum of the kinetic energies of the particles.
The work-energy theorem states that for a particle of constant mass m the total work W done on the particle from position r1 to r2 is equal to the change in kinetic energy Ek of the particle:
W = \Delta E_k = E_{k,2} - E_{k,1} = \tfrac{1}{2}m\left(v_2^{\, 2} - v_1^{\, 2}\right) \, .
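A small numerical check of the work-energy theorem with made-up one-dimensional numbers (the force is constant, so W = F · Δr reduces to the force times the displacement):

    m, F, d = 2.0, 4.0, 3.0            # kg, N, m (assumed example values)
    v1 = 1.0                           # initial speed, m/s

    W = F * d                          # work done by the constant force
    a = F / m
    v2 = (v1**2 + 2 * a * d) ** 0.5    # kinematics: v2^2 = v1^2 + 2 a d

    delta_Ek = 0.5 * m * (v2**2 - v1**2)
    print(W, delta_Ek)                 # both 12.0 J, as the theorem requires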
Conservative forces can be expressed as the gradient of a scalar function, known as the potential energy and denoted Ep:
\mathbf{F} = - \mathbf{\nabla} E_p \, .
If all the forces acting on a particle are conservative, and Ep is the total potential energy (defined as the work done by the forces involved in rearranging the mutual positions of bodies), obtained by summing the potential energies corresponding to each force, then
\mathbf{F} \cdot \Delta \mathbf{r} = - \mathbf{\nabla} E_p \cdot \Delta \mathbf{s} = - \Delta E_p
 \Rightarrow - \Delta E_p = \Delta E_k \Rightarrow \Delta (E_k + E_p) = 0 \, .
This result is known as conservation of energy and states that the total energy,
\sum E = E_k + E_p \, .
is constant in time. It is often useful, because many commonly encountered forces are conservative.

Beyond Newton's Laws

Classical mechanics also includes descriptions of the complex motions of extended non-pointlike objects. Euler's laws provide extensions to Newton's laws in this area. The concepts of angular momentum rely on the same calculus used to describe one-dimensional motion. The Rocket equation extends the notion of rate of change of an object's momentum to include the effects of an object "losing mass".
There are two important alternative formulations of classical mechanics: Lagrangian mechanics and Hamiltonian mechanics. These, and other modern formulations, usually bypass the concept of "force", instead referring to other physical quantities, such as energy, for describing mechanical systems.
The expressions given above for momentum and kinetic energy are only valid when there is no significant electromagnetic contribution. In electromagnetism, Newton's second law for current-carrying wires breaks down unless one includes the electromagnetic field contribution to the momentum of the system as expressed by the Poynting vector divided by c2, where c is the speed of light in free space.

History



Some Greek philosophers of antiquity, among them Aristotle, founder of Aristotelian physics, may have been the first to maintain the idea that "everything happens for a reason" and that theoretical principles can assist in the understanding of nature. While to a modern reader, many of these preserved ideas come forth as eminently reasonable, there is a conspicuous lack of both mathematical theory and controlled experiment, as we know it. These both turned out to be decisive factors in forming modern science, and they started out with classical mechanics.
Some of the laws of mechanics were recognized at least as early as the time of Archimedes. The medieval “science of weights” (i.e., mechanics) owes much of its importance to the work of Jordanus de Nemore. In the Elementa super demonstrationem ponderum, he introduces the concept of “positional gravity” and the use of component forces. An early mathematical and experimental scientific method was introduced into mechanics in the 11th century by al-Biruni, who along with al-Khazini in the 12th century, unified statics and dynamics into the science of mechanics, and combined the fields of hydrostatics with dynamics to create the field of hydrodynamics. Concepts related to Newton's laws of motion were also enunciated by several other Muslim physicists during the Middle Ages. Early versions of the law of inertia, known as Newton's first law of motion, and the concept relating to momentum, part of Newton's second law of motion, were described by Ibn al-Haytham (Alhazen) and Avicenna. The proportionality between force and acceleration, an important principle in classical mechanics, was first stated by Abu'l-Barakat, and Ibn Bajjah also developed the concept of a reaction force. Theories on gravity were developed by Banū Mūsā, Alhazen, and al-Khazini. It is known that Galileo Galilei's mathematical treatment of acceleration and his concept of impetus grew out of earlier medieval analyses of motion, especially those of Avicenna, Ibn Bajjah, and Jean Buridan.

Three stage Theory of impetus according to Albert of Saxony.
The first published causal explanation of the motions of planets was Johannes Kepler's Astronomia nova, published in 1609. He concluded, based on Tycho Brahe's observations of the orbit of Mars, that the orbits were ellipses. This break with ancient thought was happening around the same time that Galilei was proposing abstract mathematical laws for the motion of objects. He may (or may not) have performed the famous experiment of dropping two cannon balls of different weights from the tower of Pisa, showing that they both hit the ground at the same time. The reality of this experiment is disputed, but, more importantly, he did carry out quantitative experiments by rolling balls on an inclined plane. His theory of accelerated motion derived from the results of such experiments, and forms a cornerstone of classical mechanics.
As foundation for his principles of natural philosophy, Newton proposed three laws of motion: the law of inertia, his second law of acceleration (mentioned above), and the law of action and reaction, and hence laid the foundations for classical mechanics. Both Newton's second and third laws were given proper scientific and mathematical treatment in Newton's Philosophiæ Naturalis Principia Mathematica, which distinguishes them from earlier attempts at explaining similar phenomena, which were either incomplete, incorrect, or given little accurate mathematical expression. Newton also enunciated the principles of conservation of momentum and angular momentum. In mechanics, Newton was also the first to provide a correct scientific and mathematical formulation of gravity, in Newton's law of universal gravitation. The combination of Newton's laws of motion and gravitation provides the fullest and most accurate description of classical mechanics. He demonstrated that these laws apply to everyday objects as well as to celestial objects. In particular, he obtained a theoretical explanation of Kepler's laws of motion of the planets.
Newton had previously invented the calculus and used it to perform the mathematical calculations. For acceptability, his book, the Principia, was formulated entirely in terms of the long-established geometric methods, which were soon to be eclipsed by his calculus. However, it was Leibniz who developed the notation of the derivative and integral preferred today.

Hamilton’s greatest contribution is perhaps the reformulation of Newtonian mechanics, now called Hamiltonian mechanics.
Newton, and most of his contemporaries, with the notable exception of Huygens, worked on the assumption that classical mechanics would be able to explain all phenomena, including light, in the form of geometric optics. Even when discovering the so-called Newton's rings (a wave interference phenomenon) his explanation remained with his own corpuscular theory of light.
After Newton, classical mechanics became a principal field of study in mathematics as well as physics. After Newton there were several re-formulations which progressively allowed a solution to be found to a far greater number of problems. The first notable re-formulation was in 1788 by Joseph Louis Lagrange. Lagrangian mechanics was in turn re-formulated in 1833 by William Rowan Hamilton.
Some difficulties were discovered in the late 19th century that could only be resolved by more modern physics. Some of these difficulties related to compatibility with electromagnetic theory, and the famous Michelson-Morley experiment. The resolution of these problems led to the special theory of relativity, often included in the term classical mechanics.
A second set of difficulties were related to thermodynamics. When combined with thermodynamics, classical mechanics leads to the Gibbs paradox of classical statistical mechanics, in which entropy is not a well-defined quantity. Black-body radiation was not explained without the introduction of quanta. As experiments reached the atomic level, classical mechanics failed to explain, even approximately, such basic things as the energy levels and sizes of atoms and the photo-electric effect. The effort at resolving these problems led to the development of quantum mechanics.
Since the end of the 20th century, classical mechanics has no longer held the place of an independent theory in physics. Emphasis has shifted to understanding the fundamental forces of nature, as in the Standard Model and its more modern extensions into a unified theory of everything. Classical mechanics is a theory for the study of the motion of non-quantum-mechanical, low-energy particles in weak gravitational fields.
In the 21st century classical mechanics has been extended into the complex domain and complex classical mechanics exhibits behaviours very similar to quantum mechanics.

Limits of validity


Domain of validity for Classical Mechanics
Many branches of classical mechanics are simplifications or approximations of more accurate forms; two of the most accurate being general relativity and relativistic statistical mechanics. Geometric optics is an approximation to the quantum theory of light, and does not have a superior "classical" form.

The Newtonian approximation to special relativity

In special relativity, the momentum of a particle is given by
\mathbf{p} = \frac{m \mathbf{v}}{ \sqrt{1-v^2/c^2}} \, ,
where m is the particle's mass, v its velocity, and c is the speed of light.
If v is very small compared to c, v2/c2 is approximately zero, and so
\mathbf{p} \approx m\mathbf{v} \, .
Thus the Newtonian equation p = mv is an approximation of the relativistic equation for bodies moving with low speeds compared to the speed of light.
For example, the relativistic cyclotron frequency of a cyclotron, gyrotron, or high voltage magnetron is given by
f=f_c\frac{m_0}{m_0+T/c^2} \, ,
where fc is the classical frequency of an electron (or other charged particle) with kinetic energy T and (rest) mass m0 circling in a magnetic field. The (rest) mass of an electron is 511 keV. So the frequency correction is 1% for a magnetic vacuum tube with a 5.11 kV direct current accelerating voltage.
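The short Python sketch below simply re-derives the figures quoted above: the frequency ratio f/fc = m0/(m0 + T/c^2) for T = 5.11 keV against the 511 keV electron rest energy, and, for comparison, the size of the relativistic correction to p = mv at a modest speed. The constants are standard; the 0.1c example speed is an arbitrary choice.

    rest_energy_keV = 511.0     # electron rest energy, m0 c^2
    T_keV = 5.11                # kinetic energy from a 5.11 kV accelerating voltage

    ratio = rest_energy_keV / (rest_energy_keV + T_keV)
    print(f"f / fc = {ratio:.4f}")     # ~0.990, i.e. about a 1% frequency correction

    # Relativistic versus Newtonian momentum at v = 0.1 c:
    beta = 0.1
    gamma = (1 - beta**2) ** -0.5
    print(f"p_rel / p_newton = {gamma:.4f}")   # ~1.005, a 0.5% difference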

The classical approximation to quantum mechanics

The ray approximation of classical mechanics breaks down when the de Broglie wavelength is not much smaller than other dimensions of the system. For non-relativistic particles, this wavelength is
\lambda=\frac{h}{p}
where h is Planck's constant and p is the momentum.
Again, this happens with electrons before it happens with heavier particles. For example, the electrons used by Clinton Davisson and Lester Germer in 1927, accelerated by 54 volts, had a wavelength of 0.167 nm, which was long enough to exhibit a single diffraction side lobe when reflecting from the face of a nickel crystal with atomic spacing of 0.215 nm. With a larger vacuum chamber, it would seem relatively easy to increase the angular resolution from around a radian to a milliradian and see quantum diffraction from the periodic patterns of integrated circuit computer memory.
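The quoted 0.167 nm wavelength can be reproduced from λ = h/p, using the non-relativistic momentum gained from a 54 V accelerating potential; a short Python check (standard constants only):

    import math

    h = 6.626e-34      # Planck's constant, J s
    m_e = 9.109e-31    # electron mass, kg
    e = 1.602e-19      # elementary charge, C

    V = 54.0                           # accelerating voltage, volts
    p = math.sqrt(2 * m_e * e * V)     # from e V = p^2 / (2 m)
    lam = h / p                        # de Broglie wavelength
    print(f"lambda = {lam * 1e9:.3f} nm")   # roughly 0.167 nm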
More practical examples of the failure of classical mechanics on an engineering scale are conduction by quantum tunneling in tunnel diodes and very narrow transistor gates in integrated circuits.
Classical mechanics is the same extreme high frequency approximation as geometric optics. It is more often accurate because it describes particles and bodies with rest mass. These have more momentum and therefore shorter De Broglie wavelengths than massless particles, such as light, with the same kinetic energies.

Branches


Branches of mechanics
Classical mechanics was traditionally divided into three main branches:
  • Statics, the study of equilibrium and its relation to forces
  • Dynamics, the study of motion and its relation to forces
  • Kinematics, dealing with the implications of observed motions without regard for circumstances causing them
Another division is based on the choice of mathematical formalism:
  • Newtonian mechanics
  • Lagrangian mechanics
  • Hamiltonian mechanics
Alternatively, a division can be made by region of application:
  • Celestial mechanics, relating to stars, planets and other celestial bodies
  • Continuum mechanics, for materials which are modelled as a continuum, e.g., solids and fluids (i.e., liquids and gases).
  • Relativistic mechanics (i.e. including the special and general theories of relativity), for bodies whose speed is close to the speed of light.
  • Statistical mechanics, which provides a framework for relating the microscopic properties of individual atoms and molecules to the macroscopic or bulk thermodynamic properties of materials.