Second Law of Thermodynamics: Entropy

Second Law of Thermodynamics: Entropy, Axiomatic Definition, Microstates & Canonical Interpretation

The Second Law of Thermodynamics provides a precise definition of a property called entropy. Entropy can be seen as a measure of how close a system is to equilibrium; it can also be seen as a measure of (spatial and thermal) disorder.

The second law of thermodynamics states that the entropy of an isolated system, that is, its disorder, never decreases. When an isolated system reaches its maximum-entropy configuration, it can no longer change: equilibrium has been reached. It can be shown that the second law implies that transferring heat from a lower-temperature region to a higher-temperature region is impossible unless work is done.

Entropy is a function of the state of the system: it has a unique value for each state, independent of how the system reached that state.

ΔS = ΔQ / T

Entropy is an intrinsic property of a thermodynamic system (abbreviated STD below), linked to the measurable parameters that define the system's state.

dS = δQ / T, where:

dS: the entropy change of the STD.

δQ: the thermal energy exchanged between the medium and the STD.

T: the temperature at which the exchange of thermal energy between the medium and the STD is recorded.

This expression allows the calculation of entropy changes, but not of absolute entropy values.

The entropy change of any STD and its environment, considered together, is positive, tending to zero for reversible processes:

ΔS_total > 0 (irreversible process)

ΔS_total = 0 (reversible process)
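As a minimal numerical sketch of this relation (the latent-heat figure is a standard textbook value, not taken from this article), the entropy change for melting ice reversibly at its melting point follows directly from ΔS = Q / T:

```python
# Entropy change for a reversible, isothermal process: dS = dQ / T.
# Example: melting 1 kg of ice at its melting point (assumed standard values).

LATENT_HEAT_FUSION = 334_000.0  # J/kg, latent heat of fusion of water
T_MELT = 273.15                 # K, melting point of ice at 1 atm

def entropy_change_isothermal(q_joules: float, t_kelvin: float) -> float:
    """Entropy change for heat q absorbed reversibly at constant temperature t."""
    return q_joules / t_kelvin

mass = 1.0  # kg
q = mass * LATENT_HEAT_FUSION
print(f"dS = {entropy_change_isothermal(q, T_MELT):.1f} J/K")  # ~1222.8 J/K
```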


Entropy as probability:

Entropy increases represent a rise in molecular disorder.

In thermodynamic processes, the second law of thermodynamics introduces an additional condition: it is not enough for energy to be conserved and thus to comply with the first law. A hypothetical machine that does work in violation of the second law is called a "perpetual motion machine of the second kind," since it could continuously draw energy from a cold environment to perform work in a hot environment.

1st law (STD plus environment): ΔE_STD + ΔE_environment = 0

1st law (isolated STD): ΔE_STD = 0

2nd law (STD plus environment): ΔS_STD + ΔS_environment ≥ 0

2nd law (isolated STD): ΔS_STD ≥ 0

When ΔS_STD = 0, the system is in equilibrium and there are no transformations between the different types of energy.

When ΔS_STD > 0, the process is out of equilibrium and tends toward equilibrium, always with ΔE_STD = 0.

This is one of the most important laws in physics; although it can be formulated in several ways, all formulations lead to the explanation of irreversibility and of entropy. In other branches of physics, particularly statistical mechanics and information theory, entropy is interpreted as the degree of disorder of the matter and energy of a system. In thermodynamics proper, by contrast, entropy has no physical explanation; it is simply correlated with the amount of unusable energy in a system.

However, this merely phenomenological interpretation of entropy is entirely consistent with its statistical interpretations. The Second Law of Thermodynamics thus dictates that although matter and energy cannot be created or destroyed, they are transformed, and it establishes the direction in which this transformation occurs. The crucial point, however, is that, as with all of thermodynamic theory, the second law refers solely to states of equilibrium.

Any definition, analogy, or concept extracted from it can only be applied to equilibrium states, so, formally, parameters such as temperature or entropy itself are defined only for equilibrium states. Thus, according to the second law of thermodynamics, if a system goes from equilibrium state A to equilibrium state B, the entropy of state B will be the maximum possible and inevitably greater than that of state A.

Obviously, the system can do work only during the transition from equilibrium state A to equilibrium state B, not while it is in either of those states. If the system is closed, its energy and amount of matter cannot change, and entropy must be maximized at each transition from one equilibrium state to another. Stars offer an example: when helium nuclei are fused, less energy is released than was obtained by fusing hydrogen nuclei.

Every time a star fuses the nuclei of one element, it obtains another element that is less useful as an energy source, and eventually the star dies. The matter it leaves behind will no longer produce another star. This is how the second law of thermodynamics has been used to describe the end of the universe.


Axiomatic definition

The formal definition of the second law of thermodynamics states that:

In an equilibrium state, the values taken by the characteristic parameters of a closed thermodynamic system are those that maximize a certain magnitude, called entropy, that is a function of those parameters.

The entropy of a system is an abstract physical quantity that statistical mechanics identifies with the degree of a physical system’s internal molecular disorder. Classical thermodynamics, on the other hand, describes it as the relationship between the heat transferred and the temperature at which it is transmitted. 

Axiomatic thermodynamics defines entropy as a certain function, a priori unknown, that depends on the system's so-called "characteristic parameters" and that can be defined only for the system's equilibrium states.

Such characteristic parameters are established by a postulate derived from the first law of thermodynamics, also referred to as the state principle.

Accordingly, the equilibrium state of a system is defined by its internal energy, volume, and molar composition. Any other thermodynamic parameter, such as temperature or pressure, is a function of these parameters; entropy is therefore a function of these parameters as well.

The second law of thermodynamics establishes that this entropy can only be defined for states of thermodynamic equilibrium, and that among all the possible equilibrium states, which are defined by the characteristic parameters, only the one that maximizes the entropy is realized.

The consequences of this statement are subtle: when considering a closed system that tends toward equilibrium, the possible equilibrium states include all those compatible with the limits or constraints of the system, among them the starting equilibrium state. If the system moves from the starting equilibrium state to another, it is because the entropy of the new state is greater than that of the original state. If the system changes its equilibrium state, it can only increase its entropy; the entropy of a thermally isolated system can therefore only increase.

Assuming that the universe started from an equilibrium state, that at every instant of time it does not stray too far from thermodynamic equilibrium, and that it is an isolated system, the second law implies that the entropy of the universe increases continually over time.

Nevertheless, axiomatic thermodynamics does not admit time as a thermodynamic variable.


Formally, entropy can be defined only for equilibrium states. In a process that goes from one equilibrium state to another there are no intermediate equilibrium states, and entropy cannot be defined for such non-equilibrium states without formal contradictions within thermodynamics itself. Strictly speaking, entropy cannot be a function of time, and it is formally incorrect to speak of its variation in time.

When this is nevertheless done, it is because the transition from one equilibrium state to another is assumed to pass through infinitely many intermediate equilibrium states, a procedure that allows time to be introduced as a parameter. As long as the final equilibrium state is the one of maximum possible entropy, no outright inconsistency is incurred, because those intermediate equilibrium states do not affect the only real one (the final one).

The classical formulation states that the change in entropy S is always greater than or equal to, with equality holding exclusively for reversible processes, the heat transfer δQ divided by the equilibrium temperature T of the system:

dS \geq \frac{\delta Q}{T}
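A small numerical sketch of this inequality (temperatures and heat are assumed, illustrative values): heat flowing from a hot body to a cold one increases the total entropy, while the reverse transfer would decrease it and is therefore forbidden.

```python
# Clausius inequality sketch: heat Q flows spontaneously from a hot body
# to a cold body; the combined entropy change is positive.
# Temperatures and heat are illustrative values, not from the article.

Q = 1000.0      # J, heat transferred
T_HOT = 500.0   # K, temperature of the hot body
T_COLD = 300.0  # K, temperature of the cold body

dS_hot = -Q / T_HOT    # hot body loses heat
dS_cold = Q / T_COLD   # cold body gains the same heat
dS_total = dS_hot + dS_cold

print(f"dS_total = {dS_total:.3f} J/K")  # +1.333 J/K > 0: allowed
# The reverse transfer (cold -> hot) would give -1.333 J/K < 0: forbidden.
```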

General description

The axiomatic statement of the second law immediately reveals its main characteristic: it is one of the few ontological laws of physics, in that it distinguishes the physical processes and states that are possible from those that are not; that is, the second law of thermodynamics allows one to determine whether a process or state is possible.

Historically, the second law of thermodynamics originated in the context of thermal machines, in the midst of the Industrial Revolution, as an empirical explanation of why they behaved in one way and not another. Indeed, although it may seem trivial, it was always observed, for example, that to heat a boiler it was necessary to use fuel burning at a higher temperature than that of the boiler.

The boiler, however, was never observed to heat up by taking energy from its surroundings while those surroundings cooled down. By the first law of thermodynamics alone, nothing prevents heat from being transferred spontaneously from a cold body, e.g. at 200 K, to a hot body, e.g. at 1000 K: it suffices that the energy balance holds, with the cold body cooling down even more and the hot body heating up even more.

All of this, though, runs counter to all experience. While it seems simple and even insignificant, it had an exceptional effect on the machines of the Industrial Revolution: if it were not so, machines could have worked without fuel, since the necessary energy could have been transferred spontaneously from the surroundings.

Thermal machines, however, seemed to obey a definite law, which materialized in the second law of thermodynamics: to produce mechanical work, additional energy (fuel) had to be supplied, and the heat supplied was always greater than the amount of work produced. The idea of the thermal machine is thus closely linked to the original statement of the second law of thermodynamics.

A thermal machine is one which, thanks to the difference in temperature between two bodies, delivers useful work. Since any thermal machine requires a temperature difference, it follows that no useful work can be extracted from an isolated system in thermal equilibrium.

That is, an external energy supply is required. This empirical observation, derived from the continual study of how machines work, constitutes one of the first statements of the second law of thermodynamics: no cyclic process is possible whose only result is the absorption of energy in the form of heat from a single thermal focus (or thermal reservoir) and the conversion of all this heat into work.

Classical statements

The second law of thermodynamics has been expressed in several different ways. In short, classical thermodynamics has stated it as follows:

“No process is possible whose only result is the transfer of energy in the form of heat from a lower-temperature body to a higher-temperature body.”

Clausius statement

“No cyclic process is possible whose only result is the absorption of energy in the form of heat from a heat source (or thermal reservoir) and the conversion of all this heat into work.”

Kelvin-Planck statement

“For any potentially cyclic system, no single heat transfer is possible such that the process is reciprocal and eventually reversible.”

Statement of John De Saint

Some corollaries of the principle, sometimes used as alternative statements, would be:

“No cyclical process is such that the system in which it occurs and its environment can both return to the same state from which they started.”

“In an isolated system, no process can occur if a decrease in the total entropy of the system is associated with it.”

Corollary of the principle, due to Clausius

Visually, the second law of thermodynamics can be pictured by imagining a boiler on a steamboat. It could not produce work were it not for the steam being at a high temperature and pressure compared with the surrounding environment.

Mathematically, it is expressed thus:

\frac{dS}{dt} \geq 0 \qquad (1)

where S is the entropy, and equality holds only when the entropy has reached its maximum value (in equilibrium).

Entropy in statistical mechanics

Thermodynamics does not give a physical explanation of what entropy is: it describes it simply as a mathematical function that takes its maximum value for each equilibrium state. The usual identification of entropy with molecular disorder comes from a simplified reading of statistical mechanics, in particular of its so-called microcanonical formalism. It is important to note that, although related, thermodynamics and statistical mechanics are distinct branches of physics.

Microcanonical interpretation of entropy based on the second law of thermodynamics

The fundamental equation of a closed equilibrium thermodynamic system can be expressed as

S = S(U, V, N_1, N_2, \ldots, N_r)

where S represents the entropy of the system, from a thermodynamic point of view, U its internal energy, V its volume, and N1, N2, …, Nr the number of moles of each component of the system. All these magnitudes are macroscopic: they can be measured and estimated without any reference to the microscopic constituents of the thermodynamic system (its atoms, molecules, etc.).

It may seem intuitive to conclude that if the system is in equilibrium, then so are its most basic constituents, its atoms and molecules. However, a fundamental result of quantum mechanics states that if the system is macroscopic, a multitude of discrete quantum states of its atoms and molecules can be globally compatible with the values of U, V, and N1, N2, … of the macroscopic system. One might reason that, although the system's microscopic components have the potential to move from one quantum state to another, such transitions will not occur, since the system is closed and in equilibrium.

In reality, however, no isolation is complete. For example, even if we could insulate the system thermally in an absolute way, we could not avoid the gravitational effects that the rest of the universe continues to exert on the matter enclosed within, nor can the system be isolated perfectly from all the electromagnetic fields that surround it, however weak they may be.

In short, the system may be closed to macroscopic effects, but the action of force fields of all kinds (gravitational, electrical, …) and the system's interaction with the walls that enclose it mean that, at least microscopically, the system is not at rest: its atoms and molecules undergo continuous transitions from one quantum state to another, and the causes of these transitions are, for all practical purposes, random.

Statistical mechanics finds that a macroscopic system undergoes extremely rapid and spontaneous transitions between its various accessible quantum states, so that macroscopic measurements of parameters such as temperature, energy, and even volume are averages over a myriad of quantum, or microscopic, states. Since these transitions are produced by essentially random processes, it is accepted as a principle that a macroscopic system visits all permissible microscopic states with equal probability. These allowed microscopic states are called microstates.

The number of microstates allowed for each macroscopic equilibrium state is determined by the laws of physics. For instance, if a macroscopic system has 1000 joules of energy, it is unreasonable to suppose that a microstate of that system could have more than 1000 joules of energy.

According to the second law of thermodynamics, a macroscopic equilibrium state is defined by the values of the thermodynamic variables U, V, N1, N2, … for which the entropy S takes its maximum value among all possible ones. Suppose we have a thermodynamic system in equilibrium defined by a fundamental constraint: the system is not permitted to occupy a volume greater than a given one.

The amount of matter in the system is likewise fixed at the start. Gas in a cylinder, for example, cannot occupy a volume larger than that of the cylinder, nor can there be more gas inside than was placed there. Given these constraints on volume and mass, the system acquires the values of U that maximize the entropy, and macroscopic equilibrium is reached.

Associated with this macroscopic equilibrium state is a set of microstates: within the limits imposed by the system itself, the system's molecules undergo random transitions between different microstates. They cannot, for example, move beyond the boundaries of the system, nor can they vibrate with an energy greater than the total energy of the macroscopic system. That is, associated with the macroscopic equilibrium there is a limited, but potentially enormous, number of microstates that the system's microscopic constituents can visit with equal probability.

When we now relax a macroscopic constraint on the system, for instance by allowing the volume to be greater than before, two things happen:

  • From thermodynamics, that is, from the macroscopic point of view, the variables of the system evolve toward a state of greater entropy: the volume V is now greater than before, and even though the quantity of matter is the same, it may now occupy more space. The internal energy U of the system will adjust so that, in the new equilibrium state, the entropy S takes the maximum possible value.

That value is necessarily greater than in the previous equilibrium state. Indeed, one could conceive of the system remaining within its previous volume, with the same internal energy and the same amount of matter; in that case the entropy would not have changed, and that case is still compatible with the constraints of the system.

We know, however, that nature does not operate like this: the system will tend to occupy the entire available volume (even if it is a solid, in which case its vapour pressure will change, more of the solid will evaporate, etc.), and the equilibrium will shift. In this new equilibrium, the entropy function takes its maximum value, which is necessarily greater than in the previous equilibrium state.

  • From a microscopic viewpoint, the number of microstates compatible with the system's constraints has now increased. In essence, we still have all the microstates we had before, but new ones are added to them. For example, an atom can now move not only within the previous volume, but within the entire new volume.

Thus, as the entropy increases, so does the number of possible microstates. This suggests that entropy is defined by the number of microstates consistent with the macroscopic constraints of the system. Since the microstates are the product of chance, and each of them occurs with the same probability, it is natural to identify entropy with microscopic disorder.
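As an illustrative sketch of this identification (the gas amount and volume ratio are assumed values, not from the article), the macroscopic and microscopic routes agree for the free expansion of an ideal gas: doubling the accessible volume doubles each particle's positional possibilities, and both calculations yield the same entropy increase.

```python
import math

# Free expansion of an ideal gas from V to 2V (illustrative values).
# Thermodynamic route: dS = n R ln(V2/V1).
# Microscopic route:   dS = k_B ln(Omega2/Omega1), with Omega2/Omega1 = 2^N
#                      because each particle's accessible volume doubles.

R = 8.314            # J/(mol K), gas constant
K_B = 1.380649e-23   # J/K, Boltzmann constant
N_A = 6.02214076e23  # 1/mol, Avogadro constant

n = 1.0      # mol of gas (assumed)
ratio = 2.0  # V2 / V1

dS_thermo = n * R * math.log(ratio)
dS_micro = K_B * (n * N_A) * math.log(ratio)  # k_B ln(2^N) = N k_B ln 2

print(f"thermodynamic: {dS_thermo:.4f} J/K")
print(f"microscopic:   {dS_micro:.4f} J/K")  # same value, since R = N_A * k_B
```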

There is only one problem: according to thermodynamics, entropy is additive; that is, the entropy of two identical systems is twice the entropy of each one. The number of possible microstates, however, is multiplicative: the number of microstates of two systems together is the product of the numbers for each. For example, the number of microstates of two dice, if each die has 6 (each face being a possible microstate), is 6 × 6 = 36 (a “1” on the first and a “3” on the second, a “2” on the first and a “5” on the second, etc.). To interpret entropy in terms of microstates, we therefore need the count of microstates to obey an additive rule.

The only solution is to identify the entropy with the logarithm of the number of possible microstates:

S = k_B \ln \Omega

where k_B is the Boltzmann constant, which appears simply to set the scale of entropy, usually expressed as energy per degree of temperature (J/K). Under this interpretation, however, entropy could equally well be regarded as dimensionless.
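A minimal sketch of this additivity, using the dice example above (k_B is set to 1 here, so entropy comes out in units of k_B):

```python
import math

# Boltzmann entropy S = k_B ln(Omega): the logarithm turns the
# multiplicative microstate count into an additive entropy.

k_B = 1.0  # entropy measured in units of k_B

def boltzmann_entropy(omega: int) -> float:
    """Entropy of a system with omega equally probable microstates."""
    return k_B * math.log(omega)

S_one_die = boltzmann_entropy(6)       # one die: 6 microstates
S_two_dice = boltzmann_entropy(6 * 6)  # two dice: 36 microstates

print(f"S(one die)  = {S_one_die:.4f} k_B")
print(f"S(two dice) = {S_two_dice:.4f} k_B")
print(f"2 * S(one)  = {2 * S_one_die:.4f} k_B")  # equal: ln(36) = 2 ln(6)
```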

Canonical interpretation

The microcanonical interpretation of entropy conceives of an isolated thermodynamic system, that is, one that exchanges neither matter nor energy nor volume with its surroundings: the composition of the system, given by N1, N2, …, its internal energy U, and its volume V do not change. The system par excellence that meets these conditions is the universe itself. On many occasions, however, one considers systems that do exchange energy, mass, or volume with their environment.

In such cases the mathematical interpretation of entropy must be extended, although globally it is the microcanonical understanding that endures. If we consider, for example, a system that exchanges matter with its environment, we can conceive of a larger system comprising the initial system and its environment, such that the global system conforms to the microcanonical interpretation; in the limit, that global system would be the universe itself. And it is precisely the entropy of this microcanonical system that is subject to the second law of thermodynamics, that is, the entropy that must increase when the global equilibrium of the system changes.

One could then assume that any system, regardless of how it interacts with its environment, can be treated by conceiving of the global system that is subject to the microcanonical interpretation. In theory, its equilibrium state should be obtainable by counting the total number of microstates of the global system.

In most circumstances, however, this is very costly if not practically impossible to estimate: combinatorial calculations of the number of ways in which the available energy of a system can be distributed often exceed present mathematical knowledge. The other interpretations of entropy arise precisely to remedy these deficiencies.

The canonical interpretation, also called canonical formalism or Helmholtz formalism, concerns a thermodynamic system that can exchange energy with a thermal reservoir or thermostat. Since the reservoir is an effectively infinite source of energy, every energy state becomes accessible to the system, from the lowest to the highest.

Unlike in the microcanonical method, however, the probabilities of these states are not all equal: the system does not spend the same fraction of time in each of them. The cornerstone of canonical formalism is to evaluate this probability distribution over microstates. The problem is solved by noting that the global system formed by the thermostat and the system in question is an isolated system, to which the microcanonical reasoning applies.

If the total energy of the global system is E_tot, and the local system is in a microstate of energy E_j, the thermostat is necessarily left with energy E_tot − E_j. The probability that the global system is in a microstate in which the thermostat has energy E_tot − E_j and the local system has energy E_j is then:

P_j = \frac{\Omega_{thermostat}(E_{tot} - E_j)}{\Omega_{tot}(E_{tot})}

Following Boltzmann’s definition of entropy, this equation can be written as:

P_j = \frac{e^{S_{thermostat}(E_{tot} - E_j)/k_B}}{e^{S_{tot}(E_{tot})/k_B}}

The internal energy U is the average value of the energy of the local system, so, since entropy is additive, one can write:

S_{tot}(E_{tot}) = S(U) + S_{thermostat}(E_{tot} - U)

If we expand S_{thermostat}(E_{tot} - E_j) in a series around E_{tot} - U, we obtain:

S_{thermostat}(E_{tot} - E_j) = S_{thermostat}(E_{tot} - U + U - E_j) = S_{thermostat}(E_{tot} - U) + \frac{U - E_j}{T}

Thus, the probability can be expressed as:

P_j = e^{\frac{U - T S(U)}{k_B T}} \, e^{-\frac{E_j}{k_B T}}

And since F = U − T S(U) is the Helmholtz free energy, we can express this probability as:

P_j = e^{\beta F} e^{-\beta E_j}

where \beta = \frac{1}{k_B T}.

The total probability of being in any of these states is unity, so:

\sum_j P_j = e^{\beta F} \sum_j e^{-\beta E_j} = 1

from which one defines

e^{-\beta F} = Z

Z is the so-called canonical partition function, generally defined as:

Z = \sum_j e^{-\beta E_j}

If the partition function Z is known for a system of particles in thermal equilibrium, the entropy can be calculated as:

S = k_B \beta^2 \frac{\partial F}{\partial \beta} = \frac{\partial}{\partial T}\left(k_B T \ln Z\right) = -k_B \sum_j P_j \ln P_j

where k_B is the Boltzmann constant, T the temperature, and P_j the probabilities of the microstates.

This is the canonical, or Helmholtz, interpretation of entropy.
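A minimal computational sketch of this formalism, for a hypothetical two-level system (the energy levels and temperature are assumed values): compute Z, the Boltzmann probabilities P_j, and the entropy from S = −k_B Σ_j P_j ln P_j.

```python
import math

# Canonical formalism sketch for an assumed two-level system:
# Z = sum_j exp(-beta E_j), P_j = exp(-beta E_j)/Z, S = -k_B sum_j P_j ln P_j.

K_B = 1.380649e-23  # J/K, Boltzmann constant

def canonical_entropy(energies, temperature):
    beta = 1.0 / (K_B * temperature)
    weights = [math.exp(-beta * e) for e in energies]
    Z = sum(weights)                  # canonical partition function
    probs = [w / Z for w in weights]  # Boltzmann probabilities P_j
    return -K_B * sum(p * math.log(p) for p in probs)

# Two levels separated by roughly the thermal energy at 300 K (assumed):
levels = [0.0, 4e-21]  # J
print(f"S = {canonical_entropy(levels, 300.0):.3e} J/K")
```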

Von Neumann entropy in quantum mechanics

The definition of entropy was introduced in the 19th century for systems made up of many particles behaving classically. At the beginning of the 20th century, von Neumann extended the definition of entropy to quantum particle systems, defining the von Neumann entropy of a mixed state, characterized by a density matrix ρ, as the scalar magnitude:

S(\rho) = -k_B \, \mathrm{Tr}(\rho \ln \rho)
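A short sketch of this formula in units of k_B, for two assumed qubit density matrices (a pure state and the maximally mixed state):

```python
import numpy as np

# Von Neumann entropy S(rho) = -k_B Tr(rho ln rho) for a density matrix.
# The example states are assumed for illustration; entropy is in units of k_B.

def von_neumann_entropy(rho: np.ndarray) -> float:
    """Entropy in units of k_B, computed from the eigenvalues of rho."""
    eigenvalues = np.linalg.eigvalsh(rho)
    # Tr(rho ln rho) = sum_i lambda_i ln lambda_i over nonzero eigenvalues.
    nonzero = eigenvalues[eigenvalues > 1e-12]
    return float(-np.sum(nonzero * np.log(nonzero)))

pure = np.array([[1.0, 0.0], [0.0, 0.0]])   # pure state
mixed = np.array([[0.5, 0.0], [0.0, 0.5]])  # maximally mixed qubit

print(f"pure state:  S = {von_neumann_entropy(pure):.4f} k_B")   # 0.0
print(f"mixed state: S = {von_neumann_entropy(mixed):.4f} k_B")  # ln 2 ~ 0.6931
```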

Generalized Entropy in General Relativity

The effort to extend traditional thermodynamic analyses to the universe as a whole led, in the early 1970s, to the investigation of the thermodynamic behaviour of objects such as black holes. The preliminary outcome of this study revealed something very interesting: in the case of black holes, the second law, as conventionally formulated for classical and quantum systems, may be violated.

Nonetheless, Jacob D. Bekenstein's work on information theory and black holes suggested that the second law of thermodynamics remains valid if one introduces a generalized entropy (S_gen) that adds to the conventional entropy (S_conv) an entropy attributable to black holes, which depends on the total area (A) of the black-hole horizons in the universe. Specifically, this generalized entropy is defined as:

S_{gen} = S_{conv} + \frac{k c^3}{4 G \hbar} A

where k is the Boltzmann constant, c the speed of light, G the universal gravitational constant, and ħ the reduced Planck constant.
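As an illustrative sketch of the black-hole term (the one-solar-mass choice is an assumption; the constants are standard CODATA values), the Bekenstein-Hawking entropy of a Schwarzschild black hole can be evaluated directly:

```python
import math

# Black-hole term of the generalized entropy: S_BH = k c^3 A / (4 G hbar).
# Illustrative calculation for an assumed Schwarzschild black hole of one
# solar mass, using standard physical constants.

K_B = 1.380649e-23      # J/K, Boltzmann constant
C = 2.99792458e8        # m/s, speed of light
G = 6.67430e-11         # m^3/(kg s^2), gravitational constant
HBAR = 1.054571817e-34  # J s, reduced Planck constant
M_SUN = 1.98892e30      # kg, solar mass

def horizon_area(mass_kg: float) -> float:
    """Area of the event horizon of a Schwarzschild black hole."""
    r_s = 2.0 * G * mass_kg / C**2  # Schwarzschild radius
    return 4.0 * math.pi * r_s**2

def black_hole_entropy(mass_kg: float) -> float:
    return K_B * C**3 * horizon_area(mass_kg) / (4.0 * G * HBAR)

print(f"S_BH(1 solar mass) = {black_hole_entropy(M_SUN):.3e} J/K")  # ~1.4e54 J/K
```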

