
Monday 26 September 2011

Thermodynamics

Thermodynamics (from the Greek θέρμη, therme, meaning "heat", and δύναμις, dynamis, meaning "force") is the study of energy conversion between heat and mechanical work, and subsequently the macroscopic variables such as temperature, volume and pressure.
Historically, thermodynamics developed out of a need to increase the efficiency of early steam engines, particularly through the work of French physicist Nicolas Léonard Sadi Carnot (1824) who believed that engine efficiency was the key that could help France win the Napoleonic Wars. The first to give a concise definition of the subject was Scottish physicist William Thomson who in 1854 stated that:
Thermo-dynamics is the subject of the relation of heat to forces acting between contiguous parts of bodies, and the relation of heat to electrical agency.
Two fields of thermodynamics emerged in the following decades. Statistical thermodynamics, or statistical mechanics (1860), concerned itself with statistical predictions of the collective motion of particles from their microscopic behavior, while chemical thermodynamics (1873) studies the role of entropy in chemical reactions.

Introduction

The starting point for most thermodynamic considerations is the set of laws of thermodynamics, which postulate that energy can be exchanged between physical systems as heat or work. They also postulate the existence of a quantity named entropy, which can be defined for any isolated system that is in thermodynamic equilibrium.
In thermodynamics, interactions between large ensembles of objects are studied and categorized. Central to this are the concepts of system and surroundings. A system is composed of particles, whose average motions define its properties, which in turn are related to one another through equations of state. Properties can be combined to express internal energy and thermodynamic potentials, which are useful for determining conditions for equilibrium and spontaneous processes.
With these tools, thermodynamics can be used to describe how systems respond to changes in their environment. This can be applied to a wide variety of topics in science and engineering, such as engines, phase transitions, chemical reactions, transport phenomena, and even black holes. The results of thermodynamics are essential for other fields of physics and for chemistry, chemical engineering, aerospace engineering, mechanical engineering, cell biology, biomedical engineering, materials science, and economics, to name a few.
The present article is focused mainly on classical thermodynamics, which is concerned with systems in thermodynamic equilibrium. It is wise to distinguish classical thermodynamics from non-equilibrium thermodynamics, which is concerned with systems that are not in thermodynamic equilibrium.

Table of the thermodynamicists representative of the original eight founding schools of thermodynamics. The schools with the most lasting effect in founding the modern versions of thermodynamics were the Berlin school, particularly as established in Rudolf Clausius's 1865 textbook The Mechanical Theory of Heat, the prototype of all modern textbooks on thermodynamics; the Vienna school, with the statistical mechanics of Ludwig Boltzmann; and the Gibbsian school at Yale University, where American engineer Willard Gibbs' 1876 On the Equilibrium of Heterogeneous Substances launched chemical thermodynamics.

History

The history of thermodynamics as a scientific discipline generally begins with Otto von Guericke who, in 1650, designed and built the world's first vacuum pump and demonstrated a vacuum using his Magdeburg hemispheres. Guericke was driven to make a vacuum in order to disprove Aristotle's long-held supposition that 'nature abhors a vacuum'. Shortly after Guericke, the English physicist and chemist Robert Boyle had learned of Guericke's designs and, in 1656, in coordination with English scientist Robert Hooke, built an air pump. Using this pump, Boyle and Hooke noticed a correlation between pressure, temperature, and volume. In time, Boyle's Law was formulated, which states that pressure and volume are inversely proportional. Then, in 1679, based on these concepts, an associate of Boyle's named Denis Papin built a bone digester, which was a closed vessel with a tightly fitting lid that confined steam until a high pressure was generated.
Later designs implemented a steam release valve that kept the machine from exploding. By watching the valve rhythmically move up and down, Papin conceived of the idea of a piston and a cylinder engine. He did not, however, follow through with his design. Nevertheless, in 1697, based on Papin's designs, engineer Thomas Savery built the first engine, followed by Thomas Newcomen in 1712. Although these early engines were crude and inefficient, they attracted the attention of the leading scientists of the time. Their work led 127 years later to Sadi Carnot, the "father of thermodynamics", who, in 1824, published Reflections on the Motive Power of Fire, a discourse on heat, power, and engine efficiency. The paper outlined the basic energetic relations between the Carnot engine, the Carnot cycle, and motive power. It marked the start of thermodynamics as a modern science.
The term thermodynamics was coined by James Joule in 1849 to designate the science of relations between heat and power. By 1858, "thermo-dynamics", as a functional term, was used in William Thomson's paper An Account of Carnot's Theory of the Motive Power of Heat. The first thermodynamic textbook was written in 1859 by William Rankine, originally trained as a physicist and a civil and mechanical engineering professor at the University of Glasgow. The first and second laws of thermodynamics emerged simultaneously in the 1850s, primarily out of the works of William Rankine, Rudolf Clausius, and William Thomson (Lord Kelvin).
The foundations of statistical thermodynamics were set out by physicists such as James Clerk Maxwell, Ludwig Boltzmann, Max Planck, Rudolf Clausius and J. Willard Gibbs.
During the years 1873-76 the American mathematical physicist Josiah Willard Gibbs published a series of three papers, the most famous being On the Equilibrium of Heterogeneous Substances, in which he showed how thermodynamic processes could be analyzed graphically. By studying the energy, entropy, volume, temperature and pressure of the thermodynamic system in this manner, one can determine whether a process would occur spontaneously. During the early 20th century, chemists such as Gilbert N. Lewis, Merle Randall, and E. A. Guggenheim began to apply the mathematical methods of Gibbs to the analysis of chemical processes.

Interpretations

Thermodynamics has developed into several related branches of science, each with a different focus.

Classical thermodynamics

Classical thermodynamics is concerned with macroscopic thermodynamic states and properties of large, near-equilibrium systems. It is used to model exchanges of energy, work and heat based on the laws of thermodynamics. The term "classical" reflects the fact that it represents the level of knowledge in the early 1800s. An atomic interpretation of these principles was provided later by the development of statistical mechanics. Nonetheless, classical thermodynamics is still a practical and widely-used science.

Statistical mechanics

Statistical mechanics (or statistical thermodynamics) emerged only with the development of atomic and molecular theories in the late 1800s and early 1900s, giving thermodynamics a molecular interpretation. This field relates the microscopic properties of individual atoms and molecules to the macroscopic or bulk properties of materials that can be observed in everyday life, thereby explaining thermodynamics as a natural result of statistics and mechanics (classical and quantum) at the microscopic level. This statistical approach is in contrast to classical thermodynamics, which is a more phenomenological approach.

Chemical thermodynamics

Chemical thermodynamics is the study of the interrelation of energy with chemical reactions or with a physical change of state within the confines of the laws of thermodynamics.

Treatment of equilibrium

Equilibrium thermodynamics is the systematic study of transformations of matter and energy in systems as they approach equilibrium. The word equilibrium implies a state of balance. In an equilibrium state there are no unbalanced potentials, or driving forces, within the system. A central aim in equilibrium thermodynamics is: given a system in a well-defined initial state, subject to accurately specified constraints, to calculate what the state of the system will be once it has reached equilibrium.
Non-equilibrium thermodynamics is a branch of thermodynamics that deals with systems that are not in thermodynamic equilibrium. Most systems found in nature are not in thermodynamic equilibrium because they are not in stationary states, and are continuously and discontinuously subject to flux of matter and energy to and from other systems. The thermodynamic study of non-equilibrium systems requires more general concepts than are dealt with by equilibrium thermodynamics. Many natural systems still today remain beyond the scope of currently known macroscopic thermodynamic methods.

Laws of thermodynamics

Thermodynamics defines four laws which do not depend on the details of the systems under study or how they interact. Hence these laws are generally valid and can be applied to systems about which one knows nothing other than the balance of energy and matter transfer. Examples of such systems include Einstein's prediction of spontaneous emission, and ongoing research into the thermodynamics of black holes.
These four laws, illustrated numerically in the sketch after the list, are:
  • Zeroth law of thermodynamics: If two systems are in thermal equilibrium with a third, they are also in thermal equilibrium with each other.
Systems are said to be in equilibrium if the small, random exchanges (due to Brownian motion, for example) between them do not lead to a net change in the total energy summed over all systems. This law is tacitly assumed in every measurement of temperature. Thus, if we want to know if two bodies are at the same temperature, it is not necessary to bring them into contact and to watch whether their observable properties change with time.
This law was considered so obvious it was added as a virtual afterthought, hence the designation Zeroth, rather than Fourth. In short, if the temperature of material A is equal to the temperature of material B, and the temperature of B is equal to the temperature of material C, then the temperatures of A and C must also be equal. This implies that thermal equilibrium is an equivalence relation on the set of thermodynamic systems.
  • First law of thermodynamics: The internal energy of an isolated system is constant.
The first law of thermodynamics, an expression of the principle of conservation of energy, states that energy can be transformed (changed from one form to another), but cannot be created or destroyed.
It is usually formulated by saying that the change in the internal energy of a closed thermodynamic system is equal to the amount of heat supplied to the system, minus the amount of work done by the system on its surroundings. Work and heat are due to processes which add or subtract energy, while internal energy is a particular form of energy associated with the system. Internal energy is a property of the system whereas work done and heat supplied are not. A significant result of this distinction is that a given internal energy change can be achieved by many combinations of heat and work.
  • Second law of thermodynamics: Heat cannot spontaneously flow from a colder location to a hotter location.
The second law of thermodynamics is an expression of the universal principle of decay observable in nature. The second law is an observation of the fact that over time, differences in temperature, pressure, and chemical potential tend to even out in a physical system that is isolated from the outside world. Entropy is a measure of how much this evening-out process has progressed. The entropy of an isolated system which is not in equilibrium will tend to increase over time, approaching a maximum value at equilibrium.
In classical thermodynamics, the second law is a basic postulate applicable to any system involving heat energy transfer; in statistical thermodynamics, the second law is a consequence of the assumed randomness of molecular chaos. There are many versions of the second law, but they all have the same effect, which is to explain the phenomenon of irreversibility in nature.
  • Third law of thermodynamics: As a system approaches absolute zero, all processes cease and the entropy of the system approaches a minimum value.
The third law of thermodynamics is a statistical law of nature regarding entropy and the impossibility of reaching absolute zero of temperature. This law provides an absolute reference point for the determination of entropy. The entropy determined relative to this point is the absolute entropy. Alternate definitions are, "the entropy of all systems and of all states of a system is smallest at absolute zero," or equivalently "it is impossible to reach the absolute zero of temperature by any finite number of processes".
Absolute zero, at which all activity would stop if it were possible to happen, is −273.15 °C (degrees Celsius), or −459.67 °F (degrees Fahrenheit) or 0 K (kelvin).
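To make the first two laws concrete, here is a minimal Python sketch: it applies the first-law balance ΔU = Q − W for a closed system and checks that the total entropy change for heat flowing from a hot reservoir to a cold one is positive. All numbers are illustrative assumptions, not measured data.

    # Minimal numerical sketch of the first and second laws.
    # All values are illustrative assumptions, not measured data.

    def internal_energy_change(heat_in, work_out):
        """First law for a closed system: dU = Q - W."""
        return heat_in - work_out

    def entropy_change_heat_flow(q, t_hot, t_cold):
        """Total entropy change when heat q flows from a hot reservoir
        to a cold one; the second law requires a positive result."""
        return q / t_cold - q / t_hot

    # 500 J of heat supplied while the system does 200 J of work:
    print(internal_energy_change(500.0, 200.0))            # 300.0 J rise in U
    # 1000 J flowing spontaneously from 400 K to 300 K:
    print(entropy_change_heat_flow(1000.0, 400.0, 300.0))  # ~0.83 J/K > 0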

System models


A diagram of a generic thermodynamic system
An important concept in thermodynamics is the thermodynamic system, a precisely defined region of the universe under study. Everything in the universe except the system is known as the surroundings. A system is separated from the remainder of the universe by a boundary which may be notional or not, but which by convention delimits a finite volume. Exchanges of work, heat, or matter between the system and the surroundings take place across this boundary.
In practice, the boundary is simply an imaginary dotted line drawn around a volume when there is going to be a change in the internal energy of that volume. Anything that passes across the boundary that effects a change in the internal energy needs to be accounted for in the energy balance equation. The volume can be the region surrounding a single atom resonating energy, such as Max Planck defined in 1900; it can be a body of steam or air in a steam engine, such as Sadi Carnot defined in 1824; it can be the body of a tropical cyclone, such as Kerry Emanuel theorized in 1986 in the field of atmospheric thermodynamics; it could also be just one nuclide (i.e. a system of quarks) as hypothesized in quantum thermodynamics.
Boundaries are of four types: fixed, moveable, real, and imaginary. For example, in an engine, a fixed boundary means the piston is locked at its position; as such, a constant volume process occurs. In that same engine, a moveable boundary allows the piston to move in and out. For closed systems, boundaries are real while for open system boundaries are often imaginary.
Generally, thermodynamics distinguishes several classes of systems, defined in terms of what is allowed to cross their boundaries:
Types of thermodynamic systems

Type of system    Allows matter    Allows work    Allows heat
Open              yes              yes            yes
Closed            no               yes            yes
Isolated          no               no             no
As time passes in an isolated system, internal differences in the system tend to even out and pressures and temperatures tend to equalize, as do density differences. A system in which all equalizing processes have gone to completion is considered to be in a state of thermodynamic equilibrium.
In thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analysing a thermodynamic process, it can be assumed that each intermediate state in the process is at equilibrium. This will also considerably simplify the situation. Thermodynamic processes which develop so slowly as to allow each intermediate step to be an equilibrium state are said to be reversible processes.

States and processes

When a system is at equilibrium under a given set of conditions, it is said to be in a definite thermodynamic state. The state of the system can be described by a number of intensive variables and extensive variables. The properties of the system can be described by an equation of state which specifies the relationship between these variables. State may be thought of as the instantaneous quantitative description of a system with a set number of variables held constant.
A thermodynamic process may be defined as the energetic evolution of a thermodynamic system proceeding from an initial state to a final state. Typically, each thermodynamic process is distinguished from other processes in energetic character according to what parameters, such as temperature, pressure, or volume, etc., are held fixed. Furthermore, it is useful to group these processes into pairs, in which each variable held constant is one member of a conjugate pair.
Several common thermodynamic processes are listed below (a worked ideal-gas sketch follows the list):
  • Isobaric process: occurs at constant pressure
  • Isochoric process: occurs at constant volume (also called isometric/isovolumetric)
  • Isothermal process: occurs at a constant temperature
  • Adiabatic process: occurs without loss or gain of energy by heat
  • Isentropic process: a reversible adiabatic process, occurs at a constant entropy
  • Isenthalpic process: occurs at a constant enthalpy
  • Steady state process: occurs without a change in the internal energy
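As a rough illustration of how the fixed parameter determines the work term, here is a Python sketch of the work done by an ideal gas in three of these processes. This is a toy calculation under ideal-gas assumptions; the numbers are made up.

    import math

    R = 8.314  # gas constant, J/(mol*K)

    def work_isobaric(p, v1, v2):
        """Constant pressure: W = p * (V2 - V1)."""
        return p * (v2 - v1)

    def work_isothermal(n, t, v1, v2):
        """Constant temperature (ideal gas): W = n*R*T*ln(V2/V1)."""
        return n * R * t * math.log(v2 / v1)

    def work_isochoric():
        """Constant volume: no p-dV work is done."""
        return 0.0

    # One mole doubling its volume at 300 K does about 1.7 kJ of work:
    print(work_isothermal(1.0, 300.0, 0.010, 0.020))  # ~1729 J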

Instrumentation

There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample of an ideal gas at constant pressure. From the ideal gas law pV=nRT, the volume of such a sample can be used as an indicator of temperature; in this manner it defines temperature. Although pressure is defined mechanically, a pressure-measuring device, called a barometer may also be constructed from a sample of an ideal gas held at a constant temperature. A calorimeter is a device which is used to measure and define the internal energy of a system.
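A minimal sketch of the idealized gas thermometer described above, assuming a fixed sample of ideal gas held at constant pressure; the example values are illustrative only.

    R = 8.314  # gas constant, J/(mol*K)

    def ideal_gas_temperature(p, v, n):
        """Idealized gas thermometer: T = p*V / (n*R), so the volume of a
        fixed sample at constant pressure indicates the temperature."""
        return p * v / (n * R)

    # 1 mol at atmospheric pressure occupying 24.8 L reads about 302 K:
    print(ideal_gas_temperature(101325.0, 0.0248, 1.0))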
A thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system. It is used to impose a particular value of a state parameter upon the system. For example, a pressure reservoir is a system at a particular pressure, which imposes that pressure upon any test system that it is mechanically connected to. The Earth's atmosphere is often used as a pressure reservoir.

Conjugate variables

The central concept of thermodynamics is that of energy, the ability to do work. By the First Law, the total energy of a system and its surroundings is conserved. Energy may be transferred into a system by heating, compression, or addition of matter, and extracted from a system by cooling, expansion, or extraction of matter. In mechanics, for example, energy transfer equals the product of the force applied to a body and the resulting displacement.
Conjugate variables are pairs of thermodynamic concepts, with the first being akin to a "force" applied to some thermodynamic system, the second being akin to the resulting "displacement," and the product of the two equalling the amount of energy transferred. The common conjugate variables, illustrated in the sketch after this list, are:
  • Pressure-volume (the mechanical parameters);
  • Temperature-entropy (thermal parameters);
  • Chemical potential-particle number (material parameters).
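A small sketch of how these pairs combine into an energy balance for one quasi-static step, dU = T dS − p dV + μ dN, for a simple one-component system; the numbers are illustrative assumptions.

    def energy_step(t, ds, p, dv, mu=0.0, dn=0.0):
        """Energy balance for one quasi-static step of a simple
        one-component system: dU = T dS - p dV + mu dN."""
        return t * ds - p * dv + mu * dn

    # Heating (dS = 0.5 J/K) and expansion (dV = 1 L) at 300 K and 1 atm:
    print(energy_step(300.0, 0.5, 101325.0, 0.001))  # 150 - 101.3 = ~48.7 J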

Potentials

Thermodynamic potentials are different quantitative measures of the stored energy in a system. Potentials are used to measure energy changes in systems as they evolve from an initial state to a final state. The potential used depends on the constraints of the system, such as constant temperature or pressure. For example, the Helmholtz and Gibbs energies are the energies available in a system to do useful work when the temperature and volume or the pressure and temperature are fixed, respectively.
The five most well known potentials are:
Name                                  Symbol    Formula                  Natural variables
Internal energy                       U         TS − pV + Σi μiNi        S, V, {Ni}
Helmholtz free energy                 F, A      U − TS                   T, V, {Ni}
Enthalpy                              H         U + pV                   S, p, {Ni}
Gibbs free energy                     G         U + pV − TS              T, p, {Ni}
Landau potential (grand potential)    Ω, ΦG     U − TS − Σi μiNi         T, V, {μi}
where T is the temperature, S the entropy, p the pressure, V the volume, μ the chemical potential, N the number of particles in the system, and the subscript i indexes the particle types in the system.
Thermodynamic potentials can be derived from the energy balance equation applied to a thermodynamic system. Other thermodynamic potentials can also be obtained through Legendre transformation.
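The following Python sketch simply evaluates the defining combinations from the table above for one set of illustrative state values (not data for any real substance):

    def potentials(u, t, s, p, v, mu=0.0, n=0.0):
        """Evaluate the defining combinations from the table above."""
        return {
            "Helmholtz  F = U - TS":           u - t * s,
            "Enthalpy   H = U + pV":           u + p * v,
            "Gibbs      G = U + pV - TS":      u + p * v - t * s,
            "Landau     Omega = U - TS - muN": u - t * s - mu * n,
        }

    # Illustrative state values, not data for any real substance:
    state = potentials(u=1000.0, t=300.0, s=2.0, p=101325.0, v=0.001)
    for name, value in state.items():
        print(f"{name}: {value:.1f} J")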

Organic chemistry

Organic chemistry is a subdiscipline within chemistry involving the scientific study of the structure, properties, composition, reactions, and preparation (by synthesis or by other means) of carbon-based compounds, hydrocarbons, and their derivatives. These compounds may contain any number of other elements, including hydrogen, nitrogen, oxygen, the halogens as well as phosphorus, silicon and sulfur.
Organic compounds are structurally diverse. The range of application of organic compounds is enormous. They form the basis of, or are important constituents of many products (plastics, drugs, petrochemicals, food, explosives, paints, etc.) and, with very few exceptions, they form the basis of all earthly life processes.


History

Friedrich Wöhler
In the early nineteenth century, chemists generally believed that compounds obtained from living organisms were too complex to be obtained synthetically. According to the concept of vitalism, organic matter was endowed with a "vital force". They named these compounds "organic" and directed their investigations toward inorganic materials that seemed more easily studied.
During the first half of the nineteenth century, scientists realized that organic compounds can be synthesized in the laboratory. Around 1816 Michel Chevreul started a study of soaps made from various fats and alkalis. He separated the different acids that, in combination with the alkali, produced the soap. Since these were all individual compounds, he demonstrated that it was possible to make a chemical change in various fats (which traditionally come from organic sources), producing new compounds, without "vital force". In 1828 Friedrich Wöhler produced the organic chemical urea (carbamide), a constituent of urine, from the inorganic ammonium cyanate NH4OCN, in what is now called the Wöhler synthesis. Although Wöhler was always cautious about claiming that he had thereby destroyed the theory of vital force, historians have looked to this event as the turning point.
In 1856 William Henry Perkin, trying to manufacture quinine, accidentally manufactured the organic dye now called Perkin's mauve. By generating a huge amount of money, this discovery greatly increased interest in organic chemistry.
The crucial breakthrough for organic chemistry was the concept of chemical structure, developed independently and simultaneously by Friedrich August Kekule and Archibald Scott Couper in 1858. Both men suggested that tetravalent carbon atoms could link to each other to form a carbon lattice, and that the detailed patterns of atomic bonding could be discerned by skillful interpretations of appropriate chemical reactions.
The history of organic chemistry continued with the discovery of petroleum and its separation into fractions according to boiling ranges. The conversion of different compound types or individual compounds by various chemical processes created the petroleum chemistry, leading to the birth of the petrochemical industry, which successfully manufactured artificial rubbers, the various organic adhesives, the property-modifying petroleum additives, and plastics.
The pharmaceutical industry began in the last decade of the 19th century when acetylsalicylic acid (more commonly referred to as aspirin) manufacture was started in Germany by Bayer. The first time a drug was systematically improved was with arsphenamine (Salvarsan). Numerous derivatives of the dangerously toxic atoxyl were examined by Paul Ehrlich and his group, and the compound with best effectiveness and toxicity characteristics was selected for production.
Although early examples of organic reactions and applications were often serendipitous, the latter half of the 19th century witnessed highly systematic studies of organic compounds. Beginning in the 20th century, progress of organic chemistry allowed the synthesis of highly complex molecules via multistep procedures. Concurrently, polymers and enzymes were understood to be large organic molecules, and petroleum was shown to be of biological origin. The process of finding new synthesis routes for a given compound is called total synthesis. Total synthesis of complex natural compounds started with urea, increased in complexity to glucose and terpineol, and in 1907, total synthesis was commercialized the first time by Gustaf Komppa with camphor. Pharmaceutical benefits have been substantial, for example cholesterol-related compounds have opened ways to synthesis of complex human hormones and their modified derivatives. Since the start of the 20th century, complexity of total syntheses has been increasing, with examples such as lysergic acid and vitamin B12. Today's targets feature tens of stereogenic centers that must be synthesized correctly with asymmetric synthesis.
Biochemistry, the chemistry of living organisms, their structure and interactions in vitro and inside living systems, has only started in the 20th century, opening up a new chapter of organic chemistry with enormous scope. Biochemistry, like organic chemistry, primarily focuses on compounds containing carbon as well.

Characterization

Since organic compounds often exist as mixtures, a variety of techniques have also been developed to assess purity, especially important being chromatography techniques such as HPLC and gas chromatography. Traditional methods of separation include distillation, crystallization, and solvent extraction.
Organic compounds were traditionally characterized by a variety of chemical tests, called "wet methods," but such tests have been largely displaced by spectroscopic or other computer-intensive methods of analysis. Listed in approximate order of utility, the chief analytical methods are:
  • Nuclear magnetic resonance (NMR) spectroscopy is the most commonly used technique, often permitting complete assignment of atom connectivity and even stereochemistry using correlation spectroscopy. The principal constituent atoms of organic chemistry - hydrogen and carbon - exist naturally with NMR-responsive isotopes, respectively 1H and 13C.
  • Elemental analysis: A destructive method used to determine the elemental composition of a molecule. See also mass spectrometry, below.
  • Mass spectrometry indicates the molecular weight of a compound and, from the fragmentation patterns, its structure. High resolution mass spectrometry can usually identify the exact formula of a compound and is used in lieu of elemental analysis. In former times, mass spectrometry was restricted to neutral molecules exhibiting some volatility, but advanced ionization techniques allow one to obtain the "mass spec" of virtually any organic compound.
  • Crystallography is an unambiguous method for determining molecular geometry, the proviso being that single crystals of the material must be available and the crystal must be representative of the sample. Highly automated software allows a structure to be determined within hours of obtaining a suitable crystal.
Traditional spectroscopic methods such as infrared spectroscopy, optical rotation, UV/VIS spectroscopy provide relatively nonspecific structural information but remain in use for specific classes of compounds.
Additional methods are described in the article on analytical chemistry.

Properties

Physical properties of organic compounds typically of interest include both quantitative and qualitative features. Quantitative information includes melting point, boiling point, and index of refraction. Qualitative properties include odor, consistency, solubility, and color.

Melting and boiling properties

In contrast to many inorganic materials, organic compounds typically melt and many boil. In earlier times, the melting point (m.p.) and boiling point (b.p.) provided crucial information on the purity and identity of organic compounds. The melting and boiling points correlate with the polarity of the molecules and their molecular weight. Some organic compounds, especially symmetrical ones, sublime, that is they evaporate without melting. A well known example of a sublimable organic compound is para-dichlorobenzene, the odiferous constituent of mothballs. Organic compounds are usually not very stable at temperatures above 300 °C, although some exceptions exist.

Solubility

Neutral organic compounds tend to be hydrophobic, that is, they are less soluble in water than in organic solvents. Exceptions include organic compounds that contain ionizable groups as well as low molecular weight alcohols, amines, and carboxylic acids, where hydrogen bonding occurs. Organic compounds tend to dissolve in organic solvents. Solvents can be either pure substances like ether or ethyl alcohol, or mixtures, such as the paraffinic solvents (the various petroleum ethers and white spirits) or the range of pure or mixed aromatic solvents obtained from petroleum or tar fractions by physical separation or by chemical conversion. Solubility in the different solvents depends upon the solvent type and on the functional groups, if present.

Solid state properties

Various specialized properties are of interest depending on applications, e.g. thermo-mechanical and electro-mechanical such as piezoelectricity, electrical conductivity, and electro-optical (e.g. non-linear optics) properties. For historical reasons, such properties are mainly the subjects of the areas of polymer science and materials science.

Nomenclature

Various names and depictions for one organic compound.
The names of organic compounds are either systematic, following logically from a set of rules, or nonsystematic, following various traditions. Systematic nomenclature is stipulated by recommendations from IUPAC. Systematic nomenclature starts with the name for a parent structure within the molecule of interest. This parent name is then modified by prefixes, suffixes, and numbers to unambiguously convey the structure. Given that millions of organic compounds are known, rigorous use of systematic names can be cumbersome. Thus, IUPAC recommendations are more closely followed for simple compounds, but not complex molecules. To use the systematic naming, one must know the structures and names of the parent structures. Parent structures include unsubstituted hydrocarbons, heterocycles, and monofunctionalized derivatives thereof.
Nonsystematic nomenclature is simpler and unambiguous, at least to organic chemists. Nonsystematic names do not indicate the structure of the compound. Nonsystematic names are common for complex molecules, which includes most natural products. Thus, the informally named lysergic acid diethylamide is systematically named (6aR,9R)-N,N-diethyl-7-methyl-4,6,6a,7,8,9-hexahydroindolo-[4,3-fg] quinoline-9-carboxamide.
With the increased use of computing, other naming methods have evolved that are intended to be interpreted by machines. Two popular formats are SMILES and InChI.

Structural drawings

Organic molecules are described more commonly by drawings or structural formulas, combinations of drawings and chemical symbols. The line-angle formula is simple and unambiguous. In this system, the endpoints and intersections of each line represent one carbon, and hydrogen atoms can either be notated explicitly or assumed to be present as implied by tetravalent carbon. The depiction of organic compounds with drawings is greatly simplified by the fact that carbon in almost all organic compounds has four bonds, oxygen two, hydrogen one, and nitrogen three.

Classification of organic compounds

Functional groups

The family of carboxylic acids contains a carboxyl (-COOH) functional group. Acetic acid is an example.
The concept of functional groups is central in organic chemistry, both as a means to classify structures and for predicting properties. A functional group is a molecular module, and the reactivity of that functional group is assumed, within limits, to be the same in a variety of molecules. Functional groups can have decisive influence on the chemical and physical properties of organic compounds. Molecules are classified on the basis of their functional groups. Alcohols, for example, all have the subunit C-O-H. All alcohols tend to be somewhat hydrophilic, usually form esters, and usually can be converted to the corresponding halides. Most functional groups feature heteroatoms (atoms other than C and H). Organic compounds are classified according to functional groups: alcohols, carboxylic acids, amines, etc.

Aliphatic compounds

The aliphatic hydrocarbons are subdivided into three groups of homologous series according to their state of saturation:
  • paraffins, which are alkanes without any double or triple bonds;
  • olefins or alkenes, which contain one or more double bonds, i.e. di-olefins (dienes) or poly-olefins;
  • alkynes, which have one or more triple bonds.
The rest of the group is classed according to the functional groups present. Such compounds can be "straight-chain," branched-chain or cyclic. The degree of branching affects characteristics, such as the octane number or cetane number in petroleum chemistry.
Both saturated (alicyclic) compounds and unsaturated compounds exist as cyclic derivatives. The most stable rings contain five or six carbon atoms, but large rings (macrocycles) and smaller rings are common. The smallest cycloalkane family is the three-membered cyclopropane ((CH2)3). Saturated cyclic compounds contain single bonds only, whereas aromatic rings have alternating (or conjugated) double bonds. Cycloalkanes do not contain multiple bonds, whereas the cycloalkenes and the cycloalkynes do.

Aromatic compounds

Benzene is one of the best-known aromatic compounds as it is one of the simplest and most stable aromatics.
Aromatic hydrocarbons contain conjugated double bonds. The most important example is benzene, the structure of which was formulated by Kekulé who first proposed the delocalization or resonance principle for explaining its structure. For "conventional" cyclic compounds, aromaticity is conferred by the presence of 4n + 2 delocalized pi electrons, where n is an integer. Particular instability (antiaromaticity) is conferred by the presence of 4n conjugated pi electrons.
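The 4n + 2 counting rule is easy to express in code. Here is a minimal Python sketch of Hückel counting only; a real aromaticity check would also need the geometric and conjugation requirements.

    def is_huckel_aromatic(pi_electrons):
        """4n + 2 rule: true when the delocalized pi-electron count equals
        4n + 2 for some non-negative integer n."""
        return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

    def is_antiaromatic(pi_electrons):
        """4n conjugated pi electrons confer particular instability."""
        return pi_electrons >= 4 and pi_electrons % 4 == 0

    print(is_huckel_aromatic(6))  # True: benzene (n = 1)
    print(is_antiaromatic(4))     # True: e.g. cyclobutadiene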

Heterocyclic compounds

The characteristics of the cyclic hydrocarbons are again altered if heteroatoms are present, which can exist as either substituents attached externally to the ring (exocyclic) or as a member of the ring itself (endocyclic). In the case of the latter, the ring is termed a heterocycle. Pyridine and furan are examples of aromatic heterocycles while piperidine and tetrahydrofuran are the corresponding alicyclic heterocycles. The heteroatom of heterocyclic molecules is generally oxygen, sulfur, or nitrogen, with the latter being particularly common in biochemical systems.
Examples of groups among the heterocyclics are the aniline dyes, the great majority of the compounds discussed in biochemistry such as alkaloids, many compounds related to vitamins, steroids, nucleic acids (e.g. DNA, RNA) and also numerous medicines. Heterocyclics with relatively simple structures are pyrrole (a 5-membered ring) and indole (a pyrrole fused with a 6-membered carbon ring).
Rings can fuse with other rings on an edge to give polycyclic compounds. The purine nucleoside bases are notable polycyclic aromatic heterocycles. Rings can also fuse on a "corner" such that one atom (almost always carbon) has two bonds going to one ring and two to another. Such compounds are termed spiro and are important in a number of natural products.

Polymers

This swimming board is made of polystyrene, an example of a polymer
One important property of carbon is that it readily forms chains, or even networks, linked by carbon-carbon bonds. The linking process is called polymerization, the chains or networks are called polymers, and the source compound is a monomer. Two main groups of polymers exist: synthetic polymers, which are artificially manufactured and referred to as industrial polymers, and biopolymers, which occur naturally.
Since the invention of the first artificial polymer, bakelite, the family has quickly grown with the invention of others. Common synthetic organic polymers are polyethylene (polythene), polypropylene, nylon, teflon (PTFE), polystyrene, polyesters, polymethylmethacrylate (called perspex and plexiglas), and polyvinylchloride (PVC). Both synthetic and natural rubber are polymers.
The examples are generic terms, and many varieties of each of these may exist, with their physical characteristics fine tuned for a specific use. Changing the conditions of polymerisation changes the chemical composition of the product by altering chain length, or branching, or the tacticity. With a single monomer as a start the product is a homopolymer. Further, secondary component(s) may be added to create a heteropolymer (co-polymer) and the degree of clustering of the different components can also be controlled. Physical characteristics, such as hardness, density, mechanical or tensile strength, abrasion resistance, heat resistance, transparency, colour, etc. will depend on the final composition.

Biomolecules

Maitotoxin, a complex organic biological toxin.
Biomolecular chemistry is a major category within organic chemistry which is frequently studied by biochemists. Many complex multi-functional group molecules are important in living organisms. Some are long-chain biopolymers, and these include peptides, DNA, RNA and the polysaccharides such as starches in animals and celluloses in plants. The other main classes are amino acids (monomer building blocks of peptides and proteins), carbohydrates (which includes the polysaccharides), the nucleic acids (which include DNA and RNA as polymers), and the lipids. In addition, animal biochemistry contains many small molecule intermediates which assist in energy production through the Krebs cycle, and produces isoprene, the most common hydrocarbon in animals. Isoprenes in animals form the important steroid structural (cholesterol) and steroid hormone compounds; and in plants form terpenes, terpenoids, some alkaloids, and a unique set of hydrocarbons called biopolymer polyisoprenoids present in latex sap, which is the basis for making rubber.

Small molecules

In pharmacology, an important group of organic compounds is small molecules, also referred to as 'small organic compounds'. In this context, a small molecule is a small organic compound that is biologically active, but is not a polymer. In practice, small molecules have a molar mass less than approximately 1000 g/mol.
Molecular models of caffeine
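As a quick check of the molar-mass criterion, here is a short Python sketch computing the molar mass of caffeine (C8H10N4O2) from standard atomic weights:

    # Molar mass of caffeine, C8H10N4O2, from standard atomic weights:
    ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}

    def molar_mass(formula):
        return sum(ATOMIC_WEIGHT[el] * count for el, count in formula.items())

    caffeine = {"C": 8, "H": 10, "N": 4, "O": 2}
    print(molar_mass(caffeine))  # ~194.2 g/mol, well under the ~1000 g/mol cutoff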

Fullerenes

Fullerenes and carbon nanotubes, carbon compounds with spheroidal and tubular structures, have stimulated much research into the related field of materials science.

Others

Organic compounds containing bonds of carbon to nitrogen, oxygen and the halogens are not normally grouped separately. Others are sometimes put into major groups within organic chemistry and discussed under titles such as organosulfur chemistry, organometallic chemistry, organophosphorus chemistry and organosilicon chemistry.

Organic synthesis

A synthesis designed by E.J. Corey for oseltamivir (Tamiflu). This synthesis has 11 distinct reactions.
Synthetic organic chemistry is an applied science as it borders engineering, the "design, analysis, and/or construction of works for practical purposes". Organic synthesis of a novel compound is a problem solving task, where a synthesis is designed for a target molecule by selecting optimal reactions from optimal starting materials. Complex compounds can have tens of reaction steps that sequentially build the desired molecule. The synthesis proceeds by utilizing the reactivity of the functional groups in the molecule. For example, a carbonyl compound can be used as a nucleophile by converting it into an enolate, or as an electrophile; the combination of the two is called the aldol reaction. Designing practically useful syntheses always requires conducting the actual synthesis in the laboratory. The scientific practice of creating novel synthetic routes for complex molecules is called total synthesis.
There are several strategies to design a synthesis. The modern method of retrosynthesis, developed by E.J. Corey, starts with the target molecule and splices it to pieces according to known reactions. The pieces, or the proposed precursors, receive the same treatment, until available and ideally inexpensive starting materials are reached. Then, the retrosynthesis is written in the opposite direction to give the synthesis. A "synthetic tree" can be constructed, because each compound and also each precursor has multiple syntheses.
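A toy Python sketch of this retrosynthetic expansion follows. The disconnection table and every compound name in it are hypothetical placeholders, not real reactions; the point is only the recursive splicing until available starting materials are reached.

    # Toy retrosynthetic expansion. The disconnection table and all
    # compound names are hypothetical placeholders, not real chemistry.
    DISCONNECTIONS = {
        "target":         [("intermediate_A", "intermediate_B")],
        "intermediate_A": [("starting_material_1",)],
        "intermediate_B": [("starting_material_2", "starting_material_3")],
    }
    AVAILABLE = {"starting_material_1", "starting_material_2", "starting_material_3"}

    def retrosynthesize(compound, depth=0):
        """Print the synthetic tree, splicing each compound into precursors
        until every leaf is an available starting material."""
        print("  " * depth + compound)
        if compound in AVAILABLE:
            return
        for precursors in DISCONNECTIONS.get(compound, []):
            for precursor in precursors:
                retrosynthesize(precursor, depth + 1)

    retrosynthesize("target")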

Organic reactions

Organic reactions are chemical reactions involving organic compounds. While pure hydrocarbons undergo certain limited classes of reactions, many more reactions which organic compounds undergo are largely determined by functional groups. The general theory of these reactions involves careful analysis of such properties as the electron affinity of key atoms, bond strengths and steric hindrance. These issues can determine the relative stability of short-lived reactive intermediates, which usually directly determine the path of the reaction.
The basic reaction types are: addition reactions, elimination reactions, substitution reactions, pericyclic reactions, rearrangement reactions and redox reactions. An example of a common reaction is a substitution reaction written as:
Nu + C-X → C-Nu + X
where X is some functional group and Nu is a nucleophile.
The number of possible organic reactions is basically infinite. However, certain general patterns are observed that can be used to describe many common or useful reactions. Each reaction has a stepwise reaction mechanism that explains how it happens in sequence—although the detailed description of steps is not always clear from a list of reactants alone.
The stepwise course of any given reaction mechanism can be represented using arrow pushing techniques in which curved arrows are used to track the movement of electrons as starting materials transition through intermediates to final products.

Saturday 24 September 2011

Spectroscopy

Spectroscopy was originally the study of the interaction between radiation and matter as a function of wavelength (λ). Historically, spectroscopy referred to the use of visible light dispersed according to its wavelength, e.g. by a prism. Later the concept was expanded greatly to comprise any measurement of a quantity as a function of either wavelength or frequency. Thus, it also can refer to a response to an alternating field or varying frequency (ν). A further extension of the scope of the definition added energy (E) as a variable, once the very close relationship E = hν for photons was realized (h is the Planck constant). A plot of the response as a function of wavelength—or more commonly frequency—is referred to as a spectrum; see also spectral linewidth.
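A small sketch of the E = hν relationship in Python, converting a wavelength to photon energy via E = hc/λ (the constants are the standard defined values):

    H = 6.62607015e-34  # Planck constant, J*s
    C = 2.99792458e8    # speed of light, m/s

    def photon_energy(wavelength_m):
        """E = h*nu = h*c / lambda."""
        return H * C / wavelength_m

    # A 500 nm (green) photon carries about 4e-19 J (~2.5 eV):
    print(photon_energy(500e-9))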
Spectrometry is the spectroscopic technique used to assess the concentration or amount of a given chemical (atomic, molecular, or ionic) species. In this case, the instrument that performs such measurements is a spectrometer, spectrophotometer, or spectrograph.
Spectroscopy/spectrometry is often used in physical and analytical chemistry for the identification of substances through the spectrum emitted from or absorbed by them.
Spectroscopy/spectrometry is also heavily used in astronomy and remote sensing. Most large telescopes have spectrometers, which are used either to measure the chemical composition and physical properties of astronomical objects or to measure their velocities from the Doppler shift of their spectral lines.

Classification of methods

Nature of excitation measured

The type of spectroscopy depends on the physical quantity measured. Normally, the quantity that is measured is an intensity of energy either absorbed or produced.
  • Electromagnetic spectroscopy involves interactions of matter with electromagnetic radiation, such as light.
  • Electron spectroscopy involves interactions with electron beams. Auger spectroscopy involves inducing the Auger effect with an electron beam. In this case the measurement typically involves the kinetic energy of the electron as the variable.
  • Acoustic spectroscopy involves the frequency of sound.
  • Dielectric spectroscopy involves the frequency of an external electrical field
  • Mechanical spectroscopy involves the frequency of an external mechanical stress, e.g. a torsion applied to a piece of material.

Measurement process

Most spectroscopic methods are differentiated as either atomic or molecular, based on whether they apply to atoms or molecules. Along with that distinction, they can be classified by the nature of their interaction:
  • Absorption spectroscopy uses the range of the electromagnetic spectra in which a substance absorbs. This includes atomic absorption spectroscopy and various molecular techniques, such as infrared, ultraviolet-visible and microwave spectroscopy.
  • Emission spectroscopy uses the range of electromagnetic spectra in which a substance radiates (emits). The substance first must absorb energy. This energy can be from a variety of sources, which determines the name of the subsequent emission, like luminescence. Molecular luminescence techniques include spectrofluorimetry.
  • Scattering spectroscopy measures the amount of light that a substance scatters at certain wavelengths, incident angles, and polarization angles. One of the most useful applications of light scattering spectroscopy is Raman spectroscopy.

Common types

Absorption

Absorption spectroscopy is a technique in which the power of a beam of light measured before and after interaction with a sample is compared. Specific absorption techniques tend to be referred to by the wavelength of radiation measured such as ultraviolet, infrared or microwave absorption spectroscopy. Absorption occurs when the energy of the photons matches the energy difference between two states of the material.
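A minimal sketch of the comparison: absorbance is conventionally A = log10(P0/P), where P0 and P are the beam powers before and after the sample. The numbers are illustrative.

    import math

    def absorbance(p0, p):
        """A = log10(P0 / P), comparing beam power before and after the sample."""
        return math.log10(p0 / p)

    def transmittance(p0, p):
        return p / p0

    # A sample that lets through 10% of the incident beam:
    print(absorbance(100.0, 10.0))     # 1.0
    print(transmittance(100.0, 10.0))  # 0.1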

Fluorescence


Spectrum of light from a fluorescent lamp showing prominent mercury peaks
Fluorescence spectroscopy uses higher energy photons to excite a sample, which will then emit lower energy photons. This technique has become popular for its biochemical and medical applications, and can be used for confocal microscopy, fluorescence resonance energy transfer, and fluorescence lifetime imaging.

X-ray

When X-rays of sufficient frequency (energy) interact with a substance, inner shell electrons in the atom are excited to outer empty orbitals, or they may be removed completely, ionizing the atom. The inner shell "hole" will then be filled by electrons from outer orbitals. The energy available in this de-excitation process is emitted as radiation (fluorescence) or will remove other less-bound electrons from the atom (Auger effect). The absorption or emission frequencies (energies) are characteristic of the specific atom. In addition, for a specific atom, small frequency (energy) variations that are characteristic of the chemical bonding occur. With a suitable apparatus, these characteristic X-ray frequencies or Auger electron energies can be measured. X-ray absorption and emission spectroscopy is used in chemistry and material sciences to determine elemental composition and chemical bonding.
X-ray crystallography is a scattering process; crystalline materials scatter X-rays at well-defined angles. If the wavelength of the incident X-rays is known, this allows calculation of the distances between planes of atoms within the crystal. The intensities of the scattered X-rays give information about the atomic positions and allow the arrangement of the atoms within the crystal structure to be calculated. However, the X-ray light is then not dispersed according to its wavelength, which is set at a given value, and X-ray diffraction is thus not a spectroscopy.
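A short sketch of the plane-spacing calculation via Bragg's law, nλ = 2d sin θ; the example angle is illustrative, while 1.54 Å is the familiar Cu Kα wavelength.

    import math

    def bragg_plane_spacing(wavelength, theta_deg, n=1):
        """Bragg's law, n*lambda = 2*d*sin(theta), solved for the spacing d."""
        return n * wavelength / (2.0 * math.sin(math.radians(theta_deg)))

    # Cu K-alpha X-rays (~1.54 angstrom) diffracting at theta = 22.3 degrees:
    print(bragg_plane_spacing(1.54e-10, 22.3))  # ~2.0e-10 m, i.e. ~2.0 angstrom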

Flame technique

Liquid solution samples are aspirated into a burner or nebulizer/burner combination, desolvated, atomized, and sometimes excited to a higher energy electronic state. The use of a flame during analysis requires fuel and oxidant, typically in the form of gases. Common fuel gases used are acetylene (ethyne) or hydrogen. Common oxidant gases used are oxygen, air, or nitrous oxide. These methods are often capable of analyzing metallic element analytes in the part per million, billion, or possibly lower concentration ranges. Light detectors are needed to detect the light from the flame that carries the analysis information.
  • Atomic Emission Spectroscopy - This method uses flame excitation; atoms are excited from the heat of the flame to emit light. This method commonly uses a total consumption burner with a round burning outlet. A higher temperature flame than atomic absorption spectroscopy (AA) is typically used to produce excitation of analyte atoms. Since analyte atoms are excited by the heat of the flame, no special elemental lamps to shine into the flame are needed. A high resolution polychromator can be used to produce an emission intensity vs. wavelength spectrum over a range of wavelengths showing multiple element excitation lines, meaning multiple elements can be detected in one run. Alternatively, a monochromator can be set at one wavelength to concentrate on analysis of a single element at a certain emission line. Plasma emission spectroscopy is a more modern version of this method. See Flame emission spectroscopy for more details.
  • Atomic absorption spectroscopy (often called AA) - This method commonly uses a pre-burner nebulizer (or nebulizing chamber) to create a sample mist and a slot-shaped burner that gives a longer pathlength flame. The temperature of the flame is low enough that the flame itself does not excite sample atoms from their ground state. The nebulizer and flame are used to desolvate and atomize the sample, but the excitation of the analyte atoms is done by the use of lamps shining through the flame at various wavelengths for each type of analyte. In AA, the amount of light absorbed after going through the flame determines the amount of analyte in the sample. A graphite furnace for heating the sample to desolvate and atomize is commonly used for greater sensitivity. The graphite furnace method can also analyze some solid or slurry samples. Because of its good sensitivity and selectivity, it is still a commonly used method of analysis for certain trace elements in aqueous (and other liquid) samples.
  • Atomic Fluorescence Spectroscopy - This method commonly uses a burner with a round burning outlet. The flame is used to desolvate and atomize the sample, but a lamp shines light at a specific wavelength into the flame to excite the analyte atoms in the flame. The atoms of certain elements can then fluoresce, emitting light in a different direction. The intensity of this fluorescing light is used for quantifying the amount of analyte element in the sample. A graphite furnace can also be used for atomic fluorescence spectroscopy. This method is not as commonly used as atomic absorption or plasma emission spectroscopy.
Plasma emission spectroscopy is in some ways similar to flame atomic emission spectroscopy, and has largely replaced it. Variants include:
  • Direct-current plasma (DCP)
A direct-current plasma (DCP) is created by an electrical discharge between two electrodes. A plasma support gas is necessary, and Ar is common. Samples can be deposited on one of the electrodes, or if conducting can make up one electrode.
  • Glow discharge-optical emission spectrometry (GD-OES)
  • Inductively coupled plasma-atomic emission spectrometry (ICP-AES)
  • Laser-induced breakdown spectroscopy (LIBS), also called laser-induced plasma spectrometry (LIPS)
  • Microwave-induced plasma (MIP)
Spark or arc (emission) spectroscopy is used for the analysis of metallic elements in solid samples. For non-conductive materials, a sample is ground with graphite powder to make it conductive. In traditional arc spectroscopy methods, a sample of the solid was commonly ground up and destroyed during analysis. An electric arc or spark is passed through the sample, heating it to a high temperature to excite the atoms in it. The excited analyte atoms glow, emitting light at various wavelengths that can be detected by common spectroscopic methods. Since the conditions producing the arc emission typically are not controlled quantitatively, the analysis for the elements is qualitative. Nowadays, spark sources with controlled discharges under an argon atmosphere allow this method to be considered eminently quantitative, and its use is widely expanded worldwide through the production control laboratories of foundries and steel mills.

Visible

Many atoms emit or absorb visible light. In order to obtain a fine line spectrum, the atoms must be in a gas phase. This means that the substance has to be vaporised. The spectrum is studied in absorption or emission. Visible absorption spectroscopy is often combined with UV absorption spectroscopy in UV/Vis spectroscopy. Although this form may seem uncommon, since the human eye is a similar indicator, it still proves useful when distinguishing colours.

Ultraviolet

All atoms absorb in the Ultraviolet (UV) region because these photons are energetic enough to excite outer electrons. If the frequency is high enough, photoionization takes place. UV spectroscopy is also used in quantifying protein and DNA concentration as well as the ratio of protein to DNA concentration in a solution. Several amino acids usually found in protein, such as tryptophan, absorb light in the 280 nm range and DNA absorbs light in the 260 nm range. For this reason, the ratio of 260/280 nm absorbance is a good general indicator of the relative purity of a solution in terms of these two macromolecules. Reasonable estimates of protein or DNA concentration can also be made this way using Beer's law.
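A small sketch of the two estimates described here, assuming a 1 cm path length; the ~1.8 "pure DNA" ratio is the usual rule of thumb, and the extinction coefficient ε must be supplied for the species of interest.

    def purity_ratio(a260, a280):
        """260/280 absorbance ratio; ~1.8 is the usual rule of thumb for
        relatively pure DNA, with lower values suggesting protein."""
        return a260 / a280

    def concentration_from_beer(a, epsilon, path_cm=1.0):
        """Beer's law: A = epsilon * c * l, so c = A / (epsilon * l).
        epsilon must be supplied for the species of interest."""
        return a / (epsilon * path_cm)

    print(purity_ratio(0.90, 0.50))  # 1.8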

Infrared

Infrared spectroscopy offers the possibility to measure different types of interatomic bond vibrations at different frequencies. Especially in organic chemistry, the analysis of IR absorption spectra shows what types of bonds are present in the sample. It is also an important method for analysing polymers and constituents like fillers, pigments and plasticizers.

Near Infrared (NIR)

The near infrared (NIR) range, immediately beyond the visible wavelength range, is especially important for practical applications because of the much greater penetration depth of NIR radiation into the sample than in the case of the mid-IR spectroscopy range. This also allows large samples to be measured in each scan by NIR spectroscopy, which is currently employed for many practical applications such as: rapid grain analysis, medical diagnosis, pharmaceuticals/medicines, biotechnology, genomics analysis, proteomic analysis, interactomics research, inline textile monitoring, food analysis, chemical imaging/hyperspectral imaging of intact organisms, plastics, textiles, insect detection, forensic lab applications, crime detection, and various military applications. Interpretation of near-infrared spectra is important for chemical identification.

Raman

Raman spectroscopy uses the inelastic scattering of light to analyse vibrational and rotational modes of molecules. The resulting 'fingerprints' are an aid to analysis.

Coherent anti-Stokes Raman spectroscopy (CARS)

CARS is a recent technique that has high sensitivity and powerful applications for in vivo spectroscopy and imaging.

Nuclear magnetic resonance

Nuclear magnetic resonance spectroscopy analyzes the magnetic properties of certain atomic nuclei to determine different electronic local environments of hydrogen, carbon, or other atoms in an organic compound. This is used to help determine the structure of the compound.

Photoemission

Photoemission spectroscopy measures the kinetic energies of electrons ejected from a material by incident photons, providing information on electron binding energies and the electronic structure of surfaces and solids.

Mössbauer

Transmission or conversion-electron (CEMS) modes of Mössbauer spectroscopy probe the properties of specific isotope nuclei in different atomic environments by analyzing the resonant absorption of gamma-rays of characteristic energy, a phenomenon known as the Mössbauer effect.

Other types

There are many different types of materials analysis techniques under the broad heading of "spectroscopy", utilizing a wide variety of different approaches to probing material properties, such as absorbance, reflection, emission, scattering, thermal conductivity, and refractive index.
  • Acoustic spectroscopy
  • Auger spectroscopy is a method used to study surfaces of materials on a micro-scale. It is often used in connection with electron microscopy.
  • Cavity ring down spectroscopy
  • Circular Dichroism spectroscopy
  • Deep-level transient spectroscopy measures concentration and analyzes parameters of electrically active defects in semiconducting materials
  • Dielectric spectroscopy
  • Dual polarisation interferometry measures the real and imaginary components of the complex refractive index
  • Force spectroscopy
  • Fourier transform spectroscopy is an efficient method for processing spectral data obtained using interferometers. Nearly all infrared spectroscopy techniques (such as FTIR) and nuclear magnetic resonance (NMR) are based on Fourier transforms.
  • Fourier transform infrared spectroscopy (FTIR)
  • Hadron spectroscopy studies the energy/mass spectrum of hadrons according to spin, parity, and other particle properties. Baryon spectroscopy and meson spectroscopy are both types of hadron spectroscopy.
  • Inelastic electron tunneling spectroscopy (IETS) uses the changes in current due to inelastic electron-vibration interaction at specific energies that can also measure optically forbidden transitions.
  • Inelastic neutron scattering is similar to Raman spectroscopy, but uses neutrons instead of photons.
  • Laser spectroscopy uses tunable lasers and other types of coherent emission sources, such as optical parametric oscillators, for selective excitation of atomic or molecular species.
    • Ultra fast laser spectroscopy
  • Mechanical spectroscopy involves interactions with mechanical vibrations, such as phonons. An example is acoustic spectroscopy, involving sound waves.
  • Neutron spin echo spectroscopy measures internal dynamics in proteins and other soft matter systems
  • Nuclear magnetic resonance (NMR)
  • Photoacoustic spectroscopy measures the sound waves produced upon the absorption of radiation.
  • Photothermal spectroscopy measures heat evolved upon absorption of radiation.
  • Raman optical activity spectroscopy exploits Raman scattering and optical activity effects to reveal detailed information on chiral centers in molecules.
  • Terahertz spectroscopy uses wavelengths longer than those of infrared spectroscopy and shorter than those of microwave or millimeter wave measurements.
  • Time-resolved spectroscopy is the spectroscopy of matter in situations where the properties are changing with time.
  • Thermal infrared spectroscopy measures thermal radiation emitted from materials and surfaces and is used to determine the type of bonds present in a sample as well as their lattice environment. The techniques are widely used by organic chemists, mineralogists, and planetary scientists.

Background subtraction

Background subtraction is a term typically used in spectroscopy for the process of acquiring a background (or ambient) radiation level and then making an algorithmic adjustment to the data in order to obtain information about deviations from the background, even when those deviations are an order of magnitude weaker than the background itself.
Background subtraction can affect a number of statistical calculations (continuum, Compton, bremsstrahlung), leading to improved overall system performance.
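
As a minimal sketch (assuming a simple emission spectrum stored as NumPy arrays; the simulated data and variable names are illustrative, not tied to any particular instrument), background subtraction might look like this:

    import numpy as np

    # Sketch of background subtraction: average several background scans
    # taken with no sample present, then subtract that estimate from the
    # sample spectrum, channel by channel.

    rng = np.random.default_rng(0)
    channels = np.arange(1024)

    # Simulated data: a broad background plus a weak peak near channel 400.
    background_scans = 100.0 + rng.normal(0.0, 3.0, size=(16, channels.size))
    sample = (100.0 + 8.0 * np.exp(-((channels - 400) / 10.0) ** 2)
              + rng.normal(0.0, 3.0, size=channels.size))

    background_estimate = background_scans.mean(axis=0)  # average the scans
    corrected = sample - background_estimate             # subtract per channel

    print("Peak channel after subtraction:", int(np.argmax(corrected)))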

Applications

  • Estimating weathered wood exposure times using near-infrared spectroscopy.
  • Cure monitoring of composites using optical fibers.

Thermochemistry

In thermodynamics and physical chemistry, thermochemistry is the study of the energy evolved or absorbed in chemical reactions and physical transformations, such as melting and boiling. More generally, thermochemistry is concerned with the energy exchange accompanying transformations such as mixing, phase transitions, and chemical reactions, including the calculation of quantities such as heat capacity, heat of combustion, heat of formation, enthalpy, and free energy.

History

Thermochemistry rests on two generalizations. Stated in modern terms, they are as follows:

   1. Lavoisier and Laplace's law (1780): The energy change accompanying any transformation is equal and opposite to the energy change accompanying the reverse process.
   2. Hess's law (1840): The energy change accompanying any transformation is the same whether the process occurs in one step or many (a worked example follows below).

These statements preceded the first law of thermodynamics (1845) and helped in its formulation.
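
As a minimal sketch in Python, Hess's law can be checked for the combustion of carbon via carbon monoxide; the enthalpy values are standard textbook figures, not taken from this article:

    # Hess's law sketch: the enthalpy of C(s) + O2(g) -> CO2(g) equals the
    # sum of the enthalpies of the two-step route through carbon monoxide.
    # Values in kJ/mol are standard textbook figures.

    step1 = -110.5   # C(s) + 1/2 O2(g) -> CO(g)
    step2 = -283.0   # CO(g) + 1/2 O2(g) -> CO2(g)

    total = step1 + step2
    print(f"Two-step total: {total} kJ/mol")  # -393.5, matching the direct route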

Lavoisier, Laplace, and Hess also investigated specific heat and latent heat, although it was Joseph Black who made the most important contributions to the development of the concept of latent heat.

Gustav Kirchhoff showed in 1858 that the variation of the heat of reaction is given by the difference in heat capacity between products and reactants: dΔH / dT = ΔCp. Integration of this equation permits the evaluation of the heat of reaction at one temperature from measurements at another temperature.
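
As a minimal sketch (assuming ΔCp is constant over the temperature interval, a common first approximation), the integrated form ΔH(T2) = ΔH(T1) + ΔCp(T2 - T1) can be applied as follows; the numbers are illustrative only:

    # Kirchhoff's law sketch: extrapolate a heat of reaction to a new
    # temperature, assuming the heat-capacity difference delta Cp is
    # constant over the interval.

    def delta_h_at(delta_h_t1, delta_cp, t1, t2):
        """Integrated Kirchhoff equation, constant delta Cp (J/mol, J/(mol K), K)."""
        return delta_h_t1 + delta_cp * (t2 - t1)

    # Illustrative numbers: dH = -92.0 kJ/mol at 298 K, dCp = -45.0 J/(mol K).
    dh_500 = delta_h_at(-92.0e3, -45.0, 298.0, 500.0)
    print(f"Estimated heat of reaction at 500 K: {dh_500 / 1e3:.1f} kJ/mol")
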
Calorimetry

The measurement of heat changes is performed using a calorimeter, usually an enclosed chamber within which the change to be examined occurs. The temperature of the chamber is monitored using a thermometer or thermocouple and plotted against time to give a graph from which fundamental quantities can be calculated. Modern calorimeters are frequently supplied with automatic devices to provide a quick read-out of information; one example is the differential scanning calorimeter (DSC).
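
As a minimal sketch (the calorimeter constant and temperatures below are made-up illustrative values), the heat released follows from the calorimeter's heat capacity and the observed temperature rise:

    # Calorimetry sketch: heat released, q = C_cal * dT, where C_cal is the
    # calorimeter's heat capacity and dT the observed temperature rise.

    c_cal = 4.18e3        # J/K, illustrative calorimeter heat capacity
    t_initial = 295.15    # K
    t_final = 298.40      # K

    q = c_cal * (t_final - t_initial)
    print(f"Heat released into the calorimeter: {q:.0f} J")
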
Systems

Several thermodynamic definitions are very useful in thermochemistry. A system is the specific portion of the universe that is being studied. Everything outside the system is considered the surroundings or environment. A system may be:

  • an isolated system, which can exchange neither energy nor matter with the surroundings, as with an insulated bomb reactor;
  • a closed system, which can exchange energy but not matter with the surroundings, as with a steam radiator;
  • an open system, which can exchange both matter and energy with the surroundings, as with a pot of boiling water.
Processes

A system undergoes a process when one or more of its properties changes; a process relates the change from one state to another. An isothermal process is one in which the temperature of the system remains constant, an isobaric process is one in which the pressure remains constant, and an adiabatic process is one in which no heat is exchanged with the surroundings.

Stereochemistry

Stereochemistry, a subdiscipline of chemistry, involves the study of the relative spatial arrangement of atoms within molecules. An important branch of stereochemistry is the study of chiral molecules.

Stereochemistry is also known as 3D chemistry because the prefix "stereo-" means "three-dimensionality".

Stereochemistry is an important facet of chemistry, and the study of stereochemical problems spans the entire range of organic, inorganic, biological, physical and supramolecular chemistry.

Stereochemistry includes methods for determining and describing these spatial relationships, the effects these relationships have on the physical or biological properties of the molecules in question, and the manner in which they influence the reactivity of those molecules (dynamic stereochemistry).

Louis Pasteur could rightly be described as the first stereochemist, having observed in 1849 that salts of tartaric acid collected from wine production vessels could rotate plane polarized light, but that salts from other sources did not. This property, the only physical property in which the two types of tartrate salts differed, is due to optical isomerism. In 1874, Jacobus Henricus van 't Hoff and Joseph Le Bel explained optical activity in terms of the tetrahedral arrangement of the atoms bound to carbon.

One of the most infamous demonstrations of the significance of stereochemistry was the thalidomide disaster. Thalidomide is a drug that was first prepared in 1957 in Germany and prescribed for treating morning sickness in pregnant women. The drug was then discovered to cause deformities in babies: one optical isomer of the drug was safe, while the other had teratogenic effects, causing severe birth defects by disrupting early embryonic growth and development. In the human body, thalidomide undergoes racemization: even if only one of the two stereoisomers is ingested, the other is produced. Thalidomide is currently used as a treatment for leprosy and must be used with contraception in women to prevent pregnancy-related birth defects. This disaster was a driving force behind the requirement for strict testing of drugs before they are made available to the public.

Cahn-Ingold-Prelog priority rules are part of a system for describing a molecule's stereochemistry. They rank the atoms around a stereocenter in a standard way, allowing the relative position of these atoms in the molecule to be described unambiguously. A Fischer projection is a simplified way to depict the stereochemistry around a stereocenter.
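
As an illustration, a minimal sketch in Python (assuming the open-source RDKit cheminformatics library is installed; the SMILES string is for L-alanine) shows CIP descriptors being assigned programmatically:

    # Sketch: assigning CIP (R/S) descriptors with RDKit (assumed installed).
    from rdkit import Chem

    mol = Chem.MolFromSmiles("N[C@@H](C)C(=O)O")  # L-alanine
    centers = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
    print(centers)  # e.g. [(1, 'S')]: atom 1 is an S-configured stereocenter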

Types of stereoisomerism are:

    * Atropisomerism
    * Cis-trans isomerism
    * Conformational isomerism
    * Diastereomers
    * Enantiomers
    * Rotamers

Quantum chemistry

Quantum chemistry is a branch of theoretical chemistry, which applies quantum mechanics and quantum field theory to address problems in chemistry. The description of the electronic behavior of atoms and molecules as pertaining to their reactivity is one application of quantum chemistry. Quantum chemistry lies on the border between chemistry and physics. Thus, significant contributions have been made by scientists from both fields. It has a strong and active overlap with the field of atomic physics and molecular physics, as well as physical chemistry.

Quantum chemistry mathematically describes the fundamental behavior of matter at the molecular scale; in principle its framework spans from elementary particles such as electrons (fermions) and photons (bosons) up to cosmological phenomena such as star formation. It is, in principle, possible to describe all chemical systems using this theory. In practice, only the simplest chemical systems can realistically be investigated in purely quantum mechanical terms, and approximations must be made for most practical purposes (e.g., Hartree-Fock, post-Hartree-Fock or density functional theory; see computational chemistry for more details). Hence a detailed understanding of quantum mechanics is not necessary for most chemistry, as the important implications of the theory (principally the orbital approximation) can be understood and applied in simpler terms.

In quantum mechanics the Hamiltonian of a particle, the operator corresponding to the total energy of the system, can be expressed as the sum of two operators, one corresponding to kinetic energy and the other to potential energy. The Hamiltonian in the Schrödinger wave equation used in quantum chemistry does not contain terms for the spin of the electron.
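
For a single particle of mass m moving in a potential V, this sum takes the standard textbook form (written here in the same notation style as the Planck relation later in this article):

    \hat{H} = -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r})

where the first term is the kinetic energy operator and the second the potential energy operator.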

Solution of the Schrödinger equation for the hydrogen atom gives the form of the wave functions for atomic orbitals and the relative energies of the various orbitals. The orbital approximation can then be used to understand other atoms, e.g. helium, lithium and carbon.


History

The history of quantum chemistry essentially began with the 1838 discovery of cathode rays by Michael Faraday, the 1859 statement of the black body radiation problem by Gustav Kirchhoff, the 1877 suggestion by Ludwig Boltzmann that the energy states of a physical system could be discrete, and the 1900 quantum hypothesis by Max Planck that any energy radiating atomic system can theoretically be divided into a number of discrete energy elements ε such that each of these energy elements is proportional to the frequency ν with which they each individually radiate energy, as defined by the following formula:

    \epsilon = h \nu

where h is a numerical value called Planck's constant. Then, in 1905, to explain the photoelectric effect (first observed in 1839), i.e., that shining light on certain materials can eject electrons from the material, Albert Einstein postulated, based on Planck's quantum hypothesis, that light itself consists of individual quantum particles, which later (1926) came to be called photons. In the years to follow, this theoretical basis slowly began to be applied to chemical structure, reactivity, and bonding.
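
As a minimal numeric sketch of the Planck relation (constants from standard references; the chosen wavelength is illustrative), the energy of a single photon can be computed as follows:

    # Planck relation sketch: E = h * nu, with nu = c / wavelength.
    h = 6.626e-34      # J s, Planck's constant
    c = 2.998e8        # m/s, speed of light

    wavelength = 500e-9   # m, green light (illustrative)
    nu = c / wavelength   # frequency in Hz
    energy = h * nu       # energy of one photon in joules

    print(f"nu = {nu:.3e} Hz, E = {energy:.3e} J")  # ~4.0e-19 J
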
Electronic structure

The first step in solving a quantum chemical problem is usually solving the Schrödinger equation (or Dirac equation in relativistic quantum chemistry) with the electronic molecular Hamiltonian. This is called determining the electronic structure of the molecule. The electronic structure of a molecule or crystal largely determines its chemical properties. An exact solution of the Schrödinger equation can be obtained only for the hydrogen atom (and other one-electron systems). Since all other atomic and molecular systems involve the motions of three or more "particles", their Schrödinger equations cannot be solved exactly, and approximate solutions must be sought.
Wave model
The foundation of quantum mechanics and quantum chemistry is the wave model, in which the atom is a small, dense, positively charged nucleus surrounded by electrons. Unlike the earlier Bohr model of the atom, however, the wave model describes electrons as "clouds" moving in orbitals, and their positions are represented by probability distributions rather than discrete points. The strength of this model lies in its predictive power. Specifically, it predicts the pattern of chemically similar elements found in the periodic table. The wave model is so named because electrons exhibit properties (such as interference) traditionally associated with waves. See wave-particle duality.
Valence bond


Although the mathematical basis of quantum chemistry had been laid by Schrödinger in 1926, it is generally accepted that the first true calculation in quantum chemistry was that of the German physicists Walter Heitler and Fritz London on the hydrogen (H2) molecule in 1927. Heitler and London's method was extended by the American theoretical physicist John C. Slater and the American theoretical chemist Linus Pauling to become the Valence-Bond (VB) [or Heitler-London-Slater-Pauling (HLSP)] method. In this method, attention is primarily devoted to the pairwise interactions between atoms, and this method therefore correlates closely with classical chemists' drawings of bonds.
Molecular orbital


An alternative approach was developed in 1929 by Friedrich Hund and Robert S. Mulliken, in which electrons are described by mathematical functions delocalized over an entire molecule. The Hund-Mulliken approach, or molecular orbital (MO) method, is less intuitive to chemists, but has turned out to be more capable of predicting spectroscopic properties than the VB method. This approach is the conceptual basis of the Hartree-Fock method and further post-Hartree-Fock methods.
Density functional theory


The Thomas-Fermi model was developed independently by Thomas and Fermi in 1927. This was the first attempt to describe many-electron systems on the basis of electronic density instead of wave functions, although it was not very successful in the treatment of entire molecules. The method did provide the basis for what is now known as density functional theory. Though this method is less developed than post-Hartree-Fock methods, its significantly lower computational requirements (scaling typically no worse than n³ with respect to n basis functions) allow it to tackle larger polyatomic molecules and even macromolecules. This computational affordability and often comparable accuracy to MP2 and CCSD (post-Hartree-Fock methods) has made it one of the most popular methods in computational chemistry at present.
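
As a minimal sketch (assuming the open-source PySCF package is installed; the geometry, basis set, and functional are illustrative choices, not recommendations), a DFT calculation on a water molecule might look like this:

    # Minimal DFT sketch with PySCF (assumed installed): B3LYP energy of water.
    from pyscf import gto, dft

    mol = gto.M(
        atom="O 0 0 0; H 0 0.757 0.587; H 0 -0.757 0.587",  # angstroms
        basis="sto-3g",
    )
    mf = dft.RKS(mol)     # restricted Kohn-Sham DFT
    mf.xc = "b3lyp"       # exchange-correlation functional
    energy = mf.kernel()  # total electronic energy in hartrees
    print(f"Total energy: {energy:.6f} Eh")
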
Chemical dynamics

A further step can consist of solving the Schrödinger equation with the total molecular Hamiltonian in order to study the motion of molecules. Direct solution of the Schrödinger equation is called quantum molecular dynamics; within the semiclassical approximation it is called semiclassical molecular dynamics; and within the classical mechanics framework it is called molecular dynamics (MD). Statistical approaches, using for example Monte Carlo methods, are also possible.
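
In the classical (MD) limit, the workhorse integrator is velocity Verlet; here is a minimal sketch for a single particle in a harmonic well (units and parameters are illustrative, not tied to any real molecule):

    # Minimal classical MD sketch: velocity Verlet for one particle in a
    # harmonic potential V(x) = 0.5 * k * x**2 (illustrative reduced units).

    def force(x, k=1.0):
        return -k * x  # F = -dV/dx

    m, dt = 1.0, 0.01   # mass and time step (reduced units)
    x, v = 1.0, 0.0     # initial position and velocity

    for step in range(1000):
        a = force(x) / m
        x += v * dt + 0.5 * a * dt**2   # position update
        a_new = force(x) / m
        v += 0.5 * (a + a_new) * dt     # velocity update

    print(f"x = {x:.4f}, v = {v:.4f}")  # oscillates, conserving energy closely
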
Adiabatic chemical dynamics

In adiabatic dynamics, interatomic interactions are represented by single scalar potentials called potential energy surfaces. This is the Born-Oppenheimer approximation introduced by Born and Oppenheimer in 1927. Pioneering applications of this in chemistry were performed by Rice and Ramsperger in 1927 and Kassel in 1928, and generalized into the RRKM theory in 1952 by Marcus who took the transition state theory developed by Eyring in 1935 into account. These methods enable simple estimates of unimolecular reaction rates from a few characteristics of the potential surface.
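
The transition state theory mentioned above is commonly summarized by the Eyring equation, quoted here in its standard textbook form:

    k = \frac{k_B T}{h} \, e^{-\Delta G^{\ddagger} / (R T)}

where k_B is Boltzmann's constant, h is Planck's constant, and \Delta G^{\ddagger} is the Gibbs energy of activation.
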
Non-adiabatic chemical dynamics


Non-adiabatic dynamics consists of taking into account the interaction between several coupled potential energy surfaces (corresponding to different electronic quantum states of the molecule). The coupling terms are called vibronic couplings. The pioneering work in this field was done by Stueckelberg, Landau, and Zener in the 1930s, in their work on what is now known as the Landau-Zener transition. Their formula allows the transition probability between two diabatic potential curves in the neighborhood of an avoided crossing to be calculated (see below).
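
The Landau-Zener result is commonly quoted in the following standard textbook form for the probability of a diabatic passage:

    P_{LZ} = \exp\left( - \frac{2 \pi H_{12}^2}{\hbar v \left| F_1 - F_2 \right|} \right)

where H_{12} is the coupling between the two diabatic states, v is the nuclear velocity through the crossing region, and F_1 and F_2 are the slopes of the two diabatic curves at the crossing.
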
Quantum chemistry and quantum field theory

The application of quantum field theory (QFT) to chemical systems and theories has become increasingly common in the modern physical sciences. One of the first and most explicit appearances of this is seen in the theory of the photomagneton. In this system, plasmas, which are ubiquitous in both physics and chemistry, are studied in order to determine the basic quantization of the underlying bosonic field. However, quantum field theory is of interest in many fields of chemistry, including nuclear chemistry, astrochemistry, sonochemistry, and quantum hydrodynamics. Field-theoretic methods have also been critical in developing the ab initio effective Hamiltonian theory of semi-empirical pi-electron methods.